2025-06-11 14:00:58.801560 | Job console starting
2025-06-11 14:00:58.812766 | Updating git repos
2025-06-11 14:00:58.928735 | Cloning repos into workspace
2025-06-11 14:00:59.063700 | Restoring repo states
2025-06-11 14:00:59.085051 | Merging changes
2025-06-11 14:00:59.085072 | Checking out repos
2025-06-11 14:00:59.394562 | Preparing playbooks
2025-06-11 14:01:00.140180 | Running Ansible setup
2025-06-11 14:01:04.427440 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-11 14:01:05.229257 |
2025-06-11 14:01:05.229427 | PLAY [Base pre]
2025-06-11 14:01:05.247394 |
2025-06-11 14:01:05.247542 | TASK [Setup log path fact]
2025-06-11 14:01:05.277252 | orchestrator | ok
2025-06-11 14:01:05.295503 |
2025-06-11 14:01:05.295659 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-11 14:01:05.343048 | orchestrator | ok
2025-06-11 14:01:05.363482 |
2025-06-11 14:01:05.363638 | TASK [emit-job-header : Print job information]
2025-06-11 14:01:05.408577 | # Job Information
2025-06-11 14:01:05.408792 | Ansible Version: 2.16.14
2025-06-11 14:01:05.408856 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-11 14:01:05.408903 | Pipeline: post
2025-06-11 14:01:05.408935 | Executor: 521e9411259a
2025-06-11 14:01:05.408964 | Triggered by: https://github.com/osism/testbed/commit/6264d51a16aeebbd85fd0475f5de05969ce0ab2a
2025-06-11 14:01:05.408994 | Event ID: 7ca9a26e-46cc-11f0-8a60-ff5efb7c008e
2025-06-11 14:01:05.417152 |
2025-06-11 14:01:05.417277 | LOOP [emit-job-header : Print node information]
2025-06-11 14:01:05.545518 | orchestrator | ok:
2025-06-11 14:01:05.545812 | orchestrator | # Node Information
2025-06-11 14:01:05.545872 | orchestrator | Inventory Hostname: orchestrator
2025-06-11 14:01:05.545899 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-11 14:01:05.545922 | orchestrator | Username: zuul-testbed06
2025-06-11 14:01:05.546660 | orchestrator | Distro: Debian 12.11
2025-06-11 14:01:05.546738 | orchestrator | Provider: static-testbed
2025-06-11 14:01:05.546765 | orchestrator | Region:
2025-06-11 14:01:05.546789 | orchestrator | Label: testbed-orchestrator
2025-06-11 14:01:05.546811 | orchestrator | Product Name: OpenStack Nova
2025-06-11 14:01:05.546900 | orchestrator | Interface IP: 81.163.193.140
2025-06-11 14:01:05.581394 |
2025-06-11 14:01:05.581583 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-11 14:01:06.082692 | orchestrator -> localhost | changed
2025-06-11 14:01:06.095026 |
2025-06-11 14:01:06.095169 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-11 14:01:07.335885 | orchestrator -> localhost | changed
2025-06-11 14:01:07.370259 |
2025-06-11 14:01:07.370412 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-11 14:01:07.720257 | orchestrator -> localhost | ok
2025-06-11 14:01:07.729366 |
2025-06-11 14:01:07.729514 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-11 14:01:07.771537 | orchestrator | ok
2025-06-11 14:01:07.796164 | orchestrator | included: /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-11 14:01:07.806019 |
2025-06-11 14:01:07.806155 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-11 14:01:09.188894 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-11 14:01:09.189996 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/e49b0e958fa6455e9528dd04eee221c9_id_rsa
2025-06-11 14:01:09.190177 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/e49b0e958fa6455e9528dd04eee221c9_id_rsa.pub
2025-06-11 14:01:09.190458 | orchestrator -> localhost | The key fingerprint is:
2025-06-11 14:01:09.190550 | orchestrator -> localhost | SHA256:+rMpvV/Cw/uZ8bBAgUFaZBV/eRBuZaesYGBtPP7mirg zuul-build-sshkey
2025-06-11 14:01:09.190612 | orchestrator -> localhost | The key's randomart image is:
2025-06-11 14:01:09.190956 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-11 14:01:09.191037 | orchestrator -> localhost | | oO+o. o.+|
2025-06-11 14:01:09.191299 | orchestrator -> localhost | | = +=. o *.|
2025-06-11 14:01:09.191367 | orchestrator -> localhost | | . .o+.. B .|
2025-06-11 14:01:09.191422 | orchestrator -> localhost | | ..o + . |
2025-06-11 14:01:09.191650 | orchestrator -> localhost | | S ... |
2025-06-11 14:01:09.191734 | orchestrator -> localhost | | . + o |
2025-06-11 14:01:09.191791 | orchestrator -> localhost | | .. *o+ |
2025-06-11 14:01:09.191868 | orchestrator -> localhost | | .ooo *.B |
2025-06-11 14:01:09.191927 | orchestrator -> localhost | | Eo*=+o= . |
2025-06-11 14:01:09.192165 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-11 14:01:09.192322 | orchestrator -> localhost | ok: Runtime: 0:00:00.849895
2025-06-11 14:01:09.209704 |
2025-06-11 14:01:09.209856 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-11 14:01:09.247225 | orchestrator | ok
2025-06-11 14:01:09.272800 | orchestrator | included: /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-11 14:01:09.290363 |
2025-06-11 14:01:09.290512 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-11 14:01:09.315746 | orchestrator | skipping: Conditional result was False
2025-06-11 14:01:09.333487 |
2025-06-11 14:01:09.333640 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-11 14:01:10.364348 | orchestrator | changed
2025-06-11 14:01:10.374774 |
2025-06-11 14:01:10.374985 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-11 14:01:10.650889 | orchestrator | ok
2025-06-11 14:01:10.659548 |
2025-06-11 14:01:10.659694 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-11 14:01:11.086329 | orchestrator | ok
2025-06-11 14:01:11.094507 |
2025-06-11 14:01:11.094643 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-11 14:01:11.499209 | orchestrator | ok
2025-06-11 14:01:11.509690 |
2025-06-11 14:01:11.509833 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-11 14:01:11.545696 | orchestrator | skipping: Conditional result was False
2025-06-11 14:01:11.563549 |
2025-06-11 14:01:11.563784 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-11 14:01:12.096500 | orchestrator -> localhost | changed
2025-06-11 14:01:12.119040 |
2025-06-11 14:01:12.119513 | TASK [add-build-sshkey : Add back temp key]
2025-06-11 14:01:12.512008 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/e49b0e958fa6455e9528dd04eee221c9_id_rsa (zuul-build-sshkey)
2025-06-11 14:01:12.512511 | orchestrator -> localhost | ok: Runtime: 0:00:00.019174
2025-06-11 14:01:12.525270 |
2025-06-11 14:01:12.525416 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-11 14:01:12.949031 | orchestrator | ok
2025-06-11 14:01:12.961799 |
2025-06-11 14:01:12.962195 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-11 14:01:13.004740 | orchestrator | skipping: Conditional result was False
2025-06-11 14:01:13.101035 |
2025-06-11 14:01:13.101318 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-11 14:01:13.519290 | orchestrator | ok
2025-06-11 14:01:13.530916 |
2025-06-11 14:01:13.531040 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-11 14:01:13.579235 | orchestrator | ok
2025-06-11 14:01:13.590257 |
2025-06-11 14:01:13.590394 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-11 14:01:13.879373 | orchestrator -> localhost | ok
2025-06-11 14:01:13.896356 |
2025-06-11 14:01:13.896535 | TASK [validate-host : Collect information about the host]
2025-06-11 14:01:15.128717 | orchestrator | ok
2025-06-11 14:01:15.142806 |
2025-06-11 14:01:15.142993 | TASK [validate-host : Sanitize hostname]
2025-06-11 14:01:15.220588 | orchestrator | ok
2025-06-11 14:01:15.229103 |
2025-06-11 14:01:15.229249 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-11 14:01:15.806328 | orchestrator -> localhost | changed
2025-06-11 14:01:15.823748 |
2025-06-11 14:01:15.824091 | TASK [validate-host : Collect information about zuul worker]
2025-06-11 14:01:16.304534 | orchestrator | ok
2025-06-11 14:01:16.312154 |
2025-06-11 14:01:16.312329 | TASK [validate-host : Write out all zuul information for each host]
2025-06-11 14:01:16.867556 | orchestrator -> localhost | changed
2025-06-11 14:01:16.879337 |
2025-06-11 14:01:16.879455 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-11 14:01:17.194718 | orchestrator | ok
2025-06-11 14:01:17.208506 |
2025-06-11 14:01:17.208679 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-11 14:01:48.178542 | orchestrator | changed:
2025-06-11 14:01:48.178784 | orchestrator | .d..t...... src/
2025-06-11 14:01:48.178820 | orchestrator | .d..t...... src/github.com/
2025-06-11 14:01:48.179094 | orchestrator | .d..t...... src/github.com/osism/
2025-06-11 14:01:48.179119 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-11 14:01:48.179141 | orchestrator | RedHat.yml
2025-06-11 14:01:48.192632 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-11 14:01:48.192649 | orchestrator | RedHat.yml
2025-06-11 14:01:48.192701 | orchestrator | = 2.2.0"...
2025-06-11 14:02:05.956076 | orchestrator | 14:02:05.955 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-11 14:02:06.040168 | orchestrator | 14:02:06.039 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-11 14:02:07.433586 | orchestrator | 14:02:07.433 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-11 14:02:08.435633 | orchestrator | 14:02:08.435 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-11 14:02:09.440538 | orchestrator | 14:02:09.440 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-11 14:02:10.275210 | orchestrator | 14:02:10.275 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-11 14:02:11.097053 | orchestrator | 14:02:11.096 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-11 14:02:12.126475 | orchestrator | 14:02:12.126 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-11 14:02:12.126572 | orchestrator | 14:02:12.126 STDOUT terraform: Providers are signed by their developers.
2025-06-11 14:02:12.126580 | orchestrator | 14:02:12.126 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-11 14:02:12.126588 | orchestrator | 14:02:12.126 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-11 14:02:12.127056 | orchestrator | 14:02:12.126 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-11 14:02:12.127168 | orchestrator | 14:02:12.126 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-11 14:02:12.127182 | orchestrator | 14:02:12.126 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-11 14:02:12.127190 | orchestrator | 14:02:12.126 STDOUT terraform: you run "tofu init" in the future.
2025-06-11 14:02:12.130675 | orchestrator | 14:02:12.130 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-11 14:02:12.130771 | orchestrator | 14:02:12.130 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-11 14:02:12.130783 | orchestrator | 14:02:12.130 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-11 14:02:12.130788 | orchestrator | 14:02:12.130 STDOUT terraform: should now work.
2025-06-11 14:02:12.130793 | orchestrator | 14:02:12.130 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-11 14:02:12.130797 | orchestrator | 14:02:12.130 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-11 14:02:12.130803 | orchestrator | 14:02:12.130 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-11 14:02:12.266183 | orchestrator | 14:02:12.264 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-11 14:02:12.266273 | orchestrator | 14:02:12.264 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-11 14:02:12.465097 | orchestrator | 14:02:12.464 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-11 14:02:12.465180 | orchestrator | 14:02:12.464 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-11 14:02:12.465193 | orchestrator | 14:02:12.464 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-11 14:02:12.465199 | orchestrator | 14:02:12.464 STDOUT terraform: for this configuration.
2025-06-11 14:02:12.630517 | orchestrator | 14:02:12.630 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
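
[Editor's note: for orientation, the provider requirements implied by the init output above would look roughly like the following HCL. This is a minimal sketch, not the testbed's actual configuration: only the openstack constraint (>= 1.53.0) appears verbatim in the log, the local constraint is inferred from the truncated `= 2.2.0"...` fragment, and null appears unconstrained (init resolves its latest version).]

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # assumption: tail of the truncated "Finding ..." line above
    }
    null = {
      source = "hashicorp/null" # no constraint shown; init picked the latest (v3.2.4)
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # matches the constraint shown during init
    }
  }
}
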
2025-06-11 14:02:12.630615 | orchestrator | 14:02:12.630 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-06-11 14:02:12.718574 | orchestrator | 14:02:12.718 STDOUT terraform: ci.auto.tfvars
2025-06-11 14:02:12.724510 | orchestrator | 14:02:12.724 STDOUT terraform: default_custom.tf
2025-06-11 14:02:12.882787 | orchestrator | 14:02:12.882 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-11 14:02:13.798837 | orchestrator | 14:02:13.798 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-11 14:02:14.314357 | orchestrator | 14:02:14.313 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-11 14:02:14.705332 | orchestrator | 14:02:14.701 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-11 14:02:14.705381 | orchestrator | 14:02:14.701 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-11 14:02:14.705389 | orchestrator | 14:02:14.701 STDOUT terraform:   + create
2025-06-11 14:02:14.705394 | orchestrator | 14:02:14.701 STDOUT terraform:  <= read (data resources)
2025-06-11 14:02:14.705399 | orchestrator | 14:02:14.701 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-11 14:02:14.705403 | orchestrator | 14:02:14.701 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-11 14:02:14.705407 | orchestrator | 14:02:14.701 STDOUT terraform:   # (config refers to values not yet known)
2025-06-11 14:02:14.705411 | orchestrator | 14:02:14.701 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-11 14:02:14.705415 | orchestrator | 14:02:14.701 STDOUT terraform:       + checksum    = (known after apply)
2025-06-11 14:02:14.705419 | orchestrator | 14:02:14.701 STDOUT terraform:       + created_at  = (known after apply)
2025-06-11 14:02:14.705423 | orchestrator | 14:02:14.701 STDOUT terraform:       + file        = (known after apply)
2025-06-11 14:02:14.705426 | orchestrator | 14:02:14.701 STDOUT terraform:       + id          = (known after apply)
2025-06-11 14:02:14.705430 | orchestrator | 14:02:14.701 STDOUT terraform:       + metadata    = (known after apply)
2025-06-11 14:02:14.705452 | orchestrator | 14:02:14.701 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-11 14:02:14.705456 | orchestrator | 14:02:14.701 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-06-11 14:02:14.705460 | orchestrator | 14:02:14.701 STDOUT terraform:       + most_recent = true
2025-06-11 14:02:14.705463 | orchestrator | 14:02:14.701 STDOUT terraform:       + name        = (known after apply)
2025-06-11 14:02:14.705467 | orchestrator | 14:02:14.701 STDOUT terraform:       + protected   = (known after apply)
2025-06-11 14:02:14.705471 | orchestrator | 14:02:14.701 STDOUT terraform:       + region      = (known after apply)
2025-06-11 14:02:14.705475 | orchestrator | 14:02:14.701 STDOUT terraform:       + schema      = (known after apply)
2025-06-11 14:02:14.705479 | orchestrator | 14:02:14.701 STDOUT terraform:       + size_bytes  = (known after apply)
2025-06-11 14:02:14.705482 | orchestrator | 14:02:14.701 STDOUT terraform:       + tags        = (known after apply)
2025-06-11 14:02:14.705486 | orchestrator | 14:02:14.702 STDOUT terraform:       + updated_at  = (known after apply)
2025-06-11 14:02:14.705490 | orchestrator | 14:02:14.702 STDOUT terraform:     }
2025-06-11 14:02:14.705496 | orchestrator | 14:02:14.702 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-11 14:02:14.705500 | orchestrator | 14:02:14.702 STDOUT terraform:   # (config refers to values not yet known)
2025-06-11 14:02:14.705504 | orchestrator | 14:02:14.702 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-11 14:02:14.705508 | orchestrator | 14:02:14.702 STDOUT terraform:       + checksum    = (known after apply)
2025-06-11 14:02:14.705511 | orchestrator | 14:02:14.702 STDOUT terraform:       + created_at  = (known after apply)
2025-06-11 14:02:14.705515 | orchestrator | 14:02:14.702 STDOUT terraform:       + file        = (known after apply)
2025-06-11 14:02:14.705519 | orchestrator | 14:02:14.702 STDOUT terraform:       + id          = (known after apply)
2025-06-11 14:02:14.705522 | orchestrator | 14:02:14.702 STDOUT terraform:       + metadata    = (known after apply)
2025-06-11 14:02:14.705526 | orchestrator | 14:02:14.702 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-11 14:02:14.705529 | orchestrator | 14:02:14.702 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-06-11 14:02:14.705542 | orchestrator | 14:02:14.702 STDOUT terraform:       + most_recent = true
2025-06-11 14:02:14.705545 | orchestrator | 14:02:14.702 STDOUT terraform:       + name        = (known after apply)
2025-06-11 14:02:14.705549 | orchestrator | 14:02:14.702 STDOUT terraform:       + protected   = (known after apply)
2025-06-11 14:02:14.705553 | orchestrator | 14:02:14.702 STDOUT terraform:       + region      = (known after apply)
2025-06-11 14:02:14.705569 | orchestrator | 14:02:14.702 STDOUT terraform:       + schema      = (known after apply)
2025-06-11 14:02:14.705573 | orchestrator | 14:02:14.702 STDOUT terraform:       + size_bytes  = (known after apply)
2025-06-11 14:02:14.705576 | orchestrator | 14:02:14.702 STDOUT terraform:       + tags        = (known after apply)
2025-06-11 14:02:14.705580 | orchestrator | 14:02:14.702 STDOUT terraform:       + updated_at  = (known after apply)
2025-06-11 14:02:14.705584 | orchestrator | 14:02:14.702 STDOUT terraform:     }
2025-06-11 14:02:14.705753 | orchestrator | 14:02:14.705 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-11 14:02:14.705802 | orchestrator | 14:02:14.705 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-11 14:02:14.705845 | orchestrator | 14:02:14.705 STDOUT terraform:       + content              = (known after apply)
2025-06-11 14:02:14.705897 | orchestrator | 14:02:14.705 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-11 14:02:14.705933 | orchestrator | 14:02:14.705 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-11 14:02:14.705971 | orchestrator | 14:02:14.705 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-11 14:02:14.706011 | orchestrator | 14:02:14.705 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-11 14:02:14.706089 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-11 14:02:14.706130 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-11 14:02:14.706161 | orchestrator | 14:02:14.706 STDOUT terraform:       + directory_permission = "0777"
2025-06-11 14:02:14.706190 | orchestrator | 14:02:14.706 STDOUT terraform:       + file_permission      = "0644"
2025-06-11 14:02:14.706228 | orchestrator | 14:02:14.706 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-06-11 14:02:14.706268 | orchestrator | 14:02:14.706 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.706277 | orchestrator | 14:02:14.706 STDOUT terraform:     }
2025-06-11 14:02:14.706322 | orchestrator | 14:02:14.706 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-11 14:02:14.706350 | orchestrator | 14:02:14.706 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-11 14:02:14.706388 | orchestrator | 14:02:14.706 STDOUT terraform:       + content              = (known after apply)
2025-06-11 14:02:14.706423 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-11 14:02:14.706459 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-11 14:02:14.706497 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-11 14:02:14.706533 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-11 14:02:14.706569 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-11 14:02:14.706609 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-11 14:02:14.706634 | orchestrator | 14:02:14.706 STDOUT terraform:       + directory_permission = "0777"
2025-06-11 14:02:14.706659 | orchestrator | 14:02:14.706 STDOUT terraform:       + file_permission      = "0644"
2025-06-11 14:02:14.706692 | orchestrator | 14:02:14.706 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-06-11 14:02:14.706729 | orchestrator | 14:02:14.706 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.706736 | orchestrator | 14:02:14.706 STDOUT terraform:     }
2025-06-11 14:02:14.706769 | orchestrator | 14:02:14.706 STDOUT terraform:   # local_file.inventory will be created
2025-06-11 14:02:14.706790 | orchestrator | 14:02:14.706 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-11 14:02:14.706827 | orchestrator | 14:02:14.706 STDOUT terraform:       + content              = (known after apply)
2025-06-11 14:02:14.706877 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-11 14:02:14.706912 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-11 14:02:14.706948 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-11 14:02:14.706985 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-11 14:02:14.707021 | orchestrator | 14:02:14.706 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-11 14:02:14.707057 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-11 14:02:14.707084 | orchestrator | 14:02:14.707 STDOUT terraform:       + directory_permission = "0777"
2025-06-11 14:02:14.707109 | orchestrator | 14:02:14.707 STDOUT terraform:       + file_permission      = "0644"
2025-06-11 14:02:14.707141 | orchestrator | 14:02:14.707 STDOUT terraform:       + filename             = "inventory.ci"
2025-06-11 14:02:14.707178 | orchestrator | 14:02:14.707 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.707186 | orchestrator | 14:02:14.707 STDOUT terraform:     }
2025-06-11 14:02:14.707220 | orchestrator | 14:02:14.707 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-11 14:02:14.707251 | orchestrator | 14:02:14.707 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-11 14:02:14.707284 | orchestrator | 14:02:14.707 STDOUT terraform:       + content              = (sensitive value)
2025-06-11 14:02:14.707321 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-11 14:02:14.707357 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-11 14:02:14.707394 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-11 14:02:14.707430 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-11 14:02:14.707465 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-11 14:02:14.707504 | orchestrator | 14:02:14.707 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-11 14:02:14.707528 | orchestrator | 14:02:14.707 STDOUT terraform:       + directory_permission = "0700"
2025-06-11 14:02:14.707552 | orchestrator | 14:02:14.707 STDOUT terraform:       + file_permission      = "0600"
2025-06-11 14:02:14.707582 | orchestrator | 14:02:14.707 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-06-11 14:02:14.707622 | orchestrator | 14:02:14.707 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.707629 | orchestrator | 14:02:14.707 STDOUT terraform:     }
2025-06-11 14:02:14.707662 | orchestrator | 14:02:14.707 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-11 14:02:14.707691 | orchestrator | 14:02:14.707 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-11 14:02:14.707714 | orchestrator | 14:02:14.707 STDOUT terraform:       + id = (known after apply)
2025-06-11 14:02:14.707721 | orchestrator | 14:02:14.707 STDOUT terraform:     }
2025-06-11 14:02:14.707776 | orchestrator | 14:02:14.707 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-11 14:02:14.707830 | orchestrator | 14:02:14.707 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-11 14:02:14.707893 | orchestrator | 14:02:14.707 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.707899 | orchestrator | 14:02:14.707 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.707929 | orchestrator | 14:02:14.707 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.707967 | orchestrator | 14:02:14.707 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.708004 | orchestrator | 14:02:14.707 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.708048 | orchestrator | 14:02:14.707 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-06-11 14:02:14.708086 | orchestrator | 14:02:14.708 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.708108 | orchestrator | 14:02:14.708 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.708138 | orchestrator | 14:02:14.708 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.708162 | orchestrator | 14:02:14.708 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.708170 | orchestrator | 14:02:14.708 STDOUT terraform:     }
2025-06-11 14:02:14.708219 | orchestrator | 14:02:14.708 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-11 14:02:14.708266 | orchestrator | 14:02:14.708 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.708306 | orchestrator | 14:02:14.708 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.708330 | orchestrator | 14:02:14.708 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.708366 | orchestrator | 14:02:14.708 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.708404 | orchestrator | 14:02:14.708 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.708440 | orchestrator | 14:02:14.708 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.708485 | orchestrator | 14:02:14.708 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-06-11 14:02:14.708524 | orchestrator | 14:02:14.708 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.708544 | orchestrator | 14:02:14.708 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.708569 | orchestrator | 14:02:14.708 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.708593 | orchestrator | 14:02:14.708 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.708602 | orchestrator | 14:02:14.708 STDOUT terraform:     }
2025-06-11 14:02:14.708677 | orchestrator | 14:02:14.708 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-11 14:02:14.708723 | orchestrator | 14:02:14.708 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.708760 | orchestrator | 14:02:14.708 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.708773 | orchestrator | 14:02:14.708 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.708820 | orchestrator | 14:02:14.708 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.708871 | orchestrator | 14:02:14.708 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.708923 | orchestrator | 14:02:14.708 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.708971 | orchestrator | 14:02:14.708 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-06-11 14:02:14.709008 | orchestrator | 14:02:14.708 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.709033 | orchestrator | 14:02:14.709 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.709060 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.709085 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.709092 | orchestrator | 14:02:14.709 STDOUT terraform:     }
2025-06-11 14:02:14.709142 | orchestrator | 14:02:14.709 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-11 14:02:14.709188 | orchestrator | 14:02:14.709 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.709225 | orchestrator | 14:02:14.709 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.709256 | orchestrator | 14:02:14.709 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.709290 | orchestrator | 14:02:14.709 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.709328 | orchestrator | 14:02:14.709 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.709366 | orchestrator | 14:02:14.709 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.709412 | orchestrator | 14:02:14.709 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-06-11 14:02:14.709450 | orchestrator | 14:02:14.709 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.709472 | orchestrator | 14:02:14.709 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.709497 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.709524 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.709532 | orchestrator | 14:02:14.709 STDOUT terraform:     }
2025-06-11 14:02:14.709591 | orchestrator | 14:02:14.709 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-11 14:02:14.709636 | orchestrator | 14:02:14.709 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.709673 | orchestrator | 14:02:14.709 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.709698 | orchestrator | 14:02:14.709 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.709735 | orchestrator | 14:02:14.709 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.709774 | orchestrator | 14:02:14.709 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.709810 | orchestrator | 14:02:14.709 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.709873 | orchestrator | 14:02:14.709 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-06-11 14:02:14.709895 | orchestrator | 14:02:14.709 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.709922 | orchestrator | 14:02:14.709 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.709972 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.710000 | orchestrator | 14:02:14.709 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.710008 | orchestrator | 14:02:14.709 STDOUT terraform:     }
2025-06-11 14:02:14.710070 | orchestrator | 14:02:14.710 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-11 14:02:14.710121 | orchestrator | 14:02:14.710 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.710157 | orchestrator | 14:02:14.710 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.710182 | orchestrator | 14:02:14.710 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.710218 | orchestrator | 14:02:14.710 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.710258 | orchestrator | 14:02:14.710 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.710294 | orchestrator | 14:02:14.710 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.710339 | orchestrator | 14:02:14.710 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-06-11 14:02:14.710374 | orchestrator | 14:02:14.710 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.710397 | orchestrator | 14:02:14.710 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.710423 | orchestrator | 14:02:14.710 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.710451 | orchestrator | 14:02:14.710 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.710459 | orchestrator | 14:02:14.710 STDOUT terraform:     }
2025-06-11 14:02:14.710509 | orchestrator | 14:02:14.710 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-11 14:02:14.710556 | orchestrator | 14:02:14.710 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-11 14:02:14.710593 | orchestrator | 14:02:14.710 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.710616 | orchestrator | 14:02:14.710 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.710653 | orchestrator | 14:02:14.710 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.710691 | orchestrator | 14:02:14.710 STDOUT terraform:       + image_id             = (known after apply)
2025-06-11 14:02:14.710728 | orchestrator | 14:02:14.710 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.710773 | orchestrator | 14:02:14.710 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-06-11 14:02:14.710812 | orchestrator | 14:02:14.710 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.710833 | orchestrator | 14:02:14.710 STDOUT terraform:       + size                 = 80
2025-06-11 14:02:14.710892 | orchestrator | 14:02:14.710 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.710898 | orchestrator | 14:02:14.710 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.710904 | orchestrator | 14:02:14.710 STDOUT terraform:     }
2025-06-11 14:02:14.710947 | orchestrator | 14:02:14.710 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-11 14:02:14.710991 | orchestrator | 14:02:14.710 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.711070 | orchestrator | 14:02:14.710 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.711079 | orchestrator | 14:02:14.711 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.711122 | orchestrator | 14:02:14.711 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.711156 | orchestrator | 14:02:14.711 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.711200 | orchestrator | 14:02:14.711 STDOUT terraform:       + name                 = "testbed-volume-0-node-3"
2025-06-11 14:02:14.711240 | orchestrator | 14:02:14.711 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.711264 | orchestrator | 14:02:14.711 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.711291 | orchestrator | 14:02:14.711 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.711317 | orchestrator | 14:02:14.711 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.711326 | orchestrator | 14:02:14.711 STDOUT terraform:     }
2025-06-11 14:02:14.711372 | orchestrator | 14:02:14.711 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-06-11 14:02:14.711416 | orchestrator | 14:02:14.711 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.711452 | orchestrator | 14:02:14.711 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.711476 | orchestrator | 14:02:14.711 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.711515 | orchestrator | 14:02:14.711 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.711552 | orchestrator | 14:02:14.711 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.711593 | orchestrator | 14:02:14.711 STDOUT terraform:       + name                 = "testbed-volume-1-node-4"
2025-06-11 14:02:14.711630 | orchestrator | 14:02:14.711 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.711655 | orchestrator | 14:02:14.711 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.711678 | orchestrator | 14:02:14.711 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.711703 | orchestrator | 14:02:14.711 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.711711 | orchestrator | 14:02:14.711 STDOUT terraform:     }
2025-06-11 14:02:14.711758 | orchestrator | 14:02:14.711 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-06-11 14:02:14.711805 | orchestrator | 14:02:14.711 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.711840 | orchestrator | 14:02:14.711 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.711867 | orchestrator | 14:02:14.711 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.711908 | orchestrator | 14:02:14.711 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.711947 | orchestrator | 14:02:14.711 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.712016 | orchestrator | 14:02:14.711 STDOUT terraform:       + name                 = "testbed-volume-2-node-5"
2025-06-11 14:02:14.712024 | orchestrator | 14:02:14.711 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.712030 | orchestrator | 14:02:14.712 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.712066 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.712076 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.712096 | orchestrator | 14:02:14.712 STDOUT terraform:     }
2025-06-11 14:02:14.712140 | orchestrator | 14:02:14.712 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-06-11 14:02:14.712183 | orchestrator | 14:02:14.712 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.712220 | orchestrator | 14:02:14.712 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.712248 | orchestrator | 14:02:14.712 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.712286 | orchestrator | 14:02:14.712 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.712321 | orchestrator | 14:02:14.712 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.712361 | orchestrator | 14:02:14.712 STDOUT terraform:       + name                 = "testbed-volume-3-node-3"
2025-06-11 14:02:14.712397 | orchestrator | 14:02:14.712 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.712418 | orchestrator | 14:02:14.712 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.712444 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.712468 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.712476 | orchestrator | 14:02:14.712 STDOUT terraform:     }
2025-06-11 14:02:14.712523 | orchestrator | 14:02:14.712 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-06-11 14:02:14.712573 | orchestrator | 14:02:14.712 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.712611 | orchestrator | 14:02:14.712 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.712636 | orchestrator | 14:02:14.712 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.712674 | orchestrator | 14:02:14.712 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.712709 | orchestrator | 14:02:14.712 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.712750 | orchestrator | 14:02:14.712 STDOUT terraform:       + name                 = "testbed-volume-4-node-4"
2025-06-11 14:02:14.712787 | orchestrator | 14:02:14.712 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.712800 | orchestrator | 14:02:14.712 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.712829 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.712865 | orchestrator | 14:02:14.712 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.712892 | orchestrator | 14:02:14.712 STDOUT terraform:     }
2025-06-11 14:02:14.712938 | orchestrator | 14:02:14.712 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-06-11 14:02:14.712980 | orchestrator | 14:02:14.712 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.713018 | orchestrator | 14:02:14.712 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.713045 | orchestrator | 14:02:14.713 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.713082 | orchestrator | 14:02:14.713 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.713119 | orchestrator | 14:02:14.713 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.713158 | orchestrator | 14:02:14.713 STDOUT terraform:       + name                 = "testbed-volume-5-node-5"
2025-06-11 14:02:14.713198 | orchestrator | 14:02:14.713 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.713223 | orchestrator | 14:02:14.713 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.713243 | orchestrator | 14:02:14.713 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.713268 | orchestrator | 14:02:14.713 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.713289 | orchestrator | 14:02:14.713 STDOUT terraform:     }
2025-06-11 14:02:14.713334 | orchestrator | 14:02:14.713 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-06-11 14:02:14.713377 | orchestrator | 14:02:14.713 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.713414 | orchestrator | 14:02:14.713 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.713440 | orchestrator | 14:02:14.713 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.713484 | orchestrator | 14:02:14.713 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.714212 | orchestrator | 14:02:14.714 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.714282 | orchestrator | 14:02:14.714 STDOUT terraform:       + name                 = "testbed-volume-6-node-3"
2025-06-11 14:02:14.714339 | orchestrator | 14:02:14.714 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.714374 | orchestrator | 14:02:14.714 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.714421 | orchestrator | 14:02:14.714 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.714457 | orchestrator | 14:02:14.714 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.714482 | orchestrator | 14:02:14.714 STDOUT terraform:     }
2025-06-11 14:02:14.714542 | orchestrator | 14:02:14.714 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-06-11 14:02:14.714596 | orchestrator | 14:02:14.714 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.714650 | orchestrator | 14:02:14.714 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.714683 | orchestrator | 14:02:14.714 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.714729 | orchestrator | 14:02:14.714 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.714793 | orchestrator | 14:02:14.714 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.714843 | orchestrator | 14:02:14.714 STDOUT terraform:       + name                 = "testbed-volume-7-node-4"
2025-06-11 14:02:14.714900 | orchestrator | 14:02:14.714 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.714934 | orchestrator | 14:02:14.714 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.714973 | orchestrator | 14:02:14.714 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.715007 | orchestrator | 14:02:14.714 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.715028 | orchestrator | 14:02:14.715 STDOUT terraform:     }
2025-06-11 14:02:14.715080 | orchestrator | 14:02:14.715 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-06-11 14:02:14.715132 | orchestrator | 14:02:14.715 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-11 14:02:14.715175 | orchestrator | 14:02:14.715 STDOUT terraform:       + attachment           = (known after apply)
2025-06-11 14:02:14.715207 | orchestrator | 14:02:14.715 STDOUT terraform:       + availability_zone    = "nova"
2025-06-11 14:02:14.715252 | orchestrator | 14:02:14.715 STDOUT terraform:       + id                   = (known after apply)
2025-06-11 14:02:14.715298 | orchestrator | 14:02:14.715 STDOUT terraform:       + metadata             = (known after apply)
2025-06-11 14:02:14.715347 | orchestrator | 14:02:14.715 STDOUT terraform:       + name                 = "testbed-volume-8-node-5"
2025-06-11 14:02:14.715391 | orchestrator | 14:02:14.715 STDOUT terraform:       + region               = (known after apply)
2025-06-11 14:02:14.715421 | orchestrator | 14:02:14.715 STDOUT terraform:       + size                 = 20
2025-06-11 14:02:14.715452 | orchestrator | 14:02:14.715 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-11 14:02:14.715485 | orchestrator | 14:02:14.715 STDOUT terraform:       + volume_type          = "ssd"
2025-06-11 14:02:14.715536 | orchestrator | 14:02:14.715 STDOUT terraform:     }
2025-06-11 14:02:14.715598 | orchestrator | 14:02:14.715 STDOUT terraform:   # openstack_compute_instance_v2.manager_server will be created
2025-06-11 14:02:14.715656 | orchestrator | 14:02:14.715 STDOUT terraform:   + resource "openstack_compute_instance_v2" "manager_server" {
2025-06-11 14:02:14.715701 | orchestrator | 14:02:14.715 STDOUT terraform:       + access_ip_v4        = (known after apply)
2025-06-11 14:02:14.715745 | orchestrator | 14:02:14.715 STDOUT terraform:       + access_ip_v6        = (known after apply)
2025-06-11 14:02:14.715790 | orchestrator | 14:02:14.715 STDOUT terraform:       + all_metadata        = (known after apply)
2025-06-11 14:02:14.715835 | orchestrator | 14:02:14.715 STDOUT terraform:       + all_tags            = (known after apply)
2025-06-11 14:02:14.715899 | orchestrator | 14:02:14.715 STDOUT terraform:       + availability_zone   = "nova"
2025-06-11 14:02:14.715941 | orchestrator | 14:02:14.715 STDOUT terraform:       + config_drive        = true
2025-06-11 14:02:14.715988 | orchestrator | 14:02:14.715 STDOUT terraform:       + created             = (known after apply)
2025-06-11 14:02:14.716031 | orchestrator | 14:02:14.715 STDOUT terraform:       + flavor_id           = (known after apply)
2025-06-11 14:02:14.716069 | orchestrator | 14:02:14.716 STDOUT terraform:       + flavor_name         = "OSISM-4V-16"
2025-06-11 14:02:14.716101 | orchestrator | 14:02:14.716 STDOUT terraform:       + force_delete        = false
2025-06-11 14:02:14.716145 | orchestrator | 14:02:14.716 STDOUT terraform:       + hypervisor_hostname = (known after apply)
2025-06-11 14:02:14.716193 | orchestrator | 14:02:14.716 STDOUT terraform:       + id                  = (known after apply)
2025-06-11 14:02:14.716236 | orchestrator | 14:02:14.716 STDOUT terraform:       + image_id            = (known after apply)
2025-06-11 14:02:14.716283 | orchestrator | 14:02:14.716 STDOUT terraform:       + image_name          = (known after apply)
2025-06-11 14:02:14.716317 | orchestrator | 14:02:14.716 STDOUT terraform:       + key_pair            = "testbed"
2025-06-11 14:02:14.716357 | orchestrator | 14:02:14.716 STDOUT terraform:       + name                = "testbed-manager"
2025-06-11 14:02:14.716391 | orchestrator | 14:02:14.716 STDOUT terraform:       + power_state         = "active"
2025-06-11 14:02:14.716434 | orchestrator | 14:02:14.716 STDOUT terraform:       + region              = (known after apply)
2025-06-11 14:02:14.716477 | orchestrator | 14:02:14.716 STDOUT terraform:       + security_groups     = (known after apply)
2025-06-11 14:02:14.716508 | orchestrator | 14:02:14.716 STDOUT terraform:       + stop_before_destroy = false
2025-06-11 14:02:14.716552 | orchestrator | 14:02:14.716 STDOUT terraform:       + updated             = (known after apply)
2025-06-11 14:02:14.716596 | orchestrator | 14:02:14.716 STDOUT terraform:       + user_data           = (known after apply)
2025-06-11 14:02:14.716621 | orchestrator | 14:02:14.716 STDOUT terraform:       + block_device {
2025-06-11 14:02:14.716656 | orchestrator | 14:02:14.716 STDOUT terraform:           + boot_index            = 0
2025-06-11 14:02:14.716694 | orchestrator | 14:02:14.716 STDOUT terraform:           + delete_on_termination = false
2025-06-11 14:02:14.716731 | orchestrator | 14:02:14.716 STDOUT terraform:           + destination_type      = "volume"
2025-06-11 14:02:14.716768 | orchestrator | 14:02:14.716 STDOUT terraform:           + multiattach           = false
2025-06-11 14:02:14.716807 | orchestrator | 14:02:14.716 STDOUT terraform:           + source_type           = "volume"
2025-06-11 14:02:14.716867 | orchestrator | 14:02:14.716 STDOUT terraform:           + uuid                  = (known after apply)
2025-06-11 14:02:14.716893 | orchestrator | 14:02:14.716 STDOUT terraform:         }
2025-06-11 14:02:14.716917 | orchestrator | 14:02:14.716 STDOUT terraform:       + network {
2025-06-11 14:02:14.716945 | orchestrator | 14:02:14.716 STDOUT terraform:           + access_network = false
2025-06-11 14:02:14.716985 | orchestrator | 14:02:14.716 STDOUT terraform:           + fixed_ip_v4    = (known after apply)
2025-06-11 14:02:14.717025 | orchestrator | 14:02:14.716 STDOUT terraform:           + fixed_ip_v6    = (known after apply)
2025-06-11 14:02:14.717068 | orchestrator | 14:02:14.717 STDOUT terraform:           + mac            = (known after apply)
2025-06-11 14:02:14.717113 | orchestrator | 14:02:14.717 STDOUT terraform:           + name           = (known after apply)
2025-06-11 14:02:14.717153 | orchestrator | 14:02:14.717 STDOUT terraform:           + port           = (known after apply)
2025-06-11 14:02:14.717193 | orchestrator | 14:02:14.717 STDOUT terraform:           + uuid           = (known after apply)
2025-06-11 14:02:14.717215 | orchestrator | 14:02:14.717 STDOUT terraform:         }
2025-06-11 14:02:14.717237 | orchestrator | 14:02:14.717 STDOUT terraform:     }
2025-06-11 14:02:14.726083 | orchestrator | 14:02:14.718 STDOUT terraform:   # openstack_compute_instance_v2.node_server[0] will be created
2025-06-11 14:02:14.726110 | orchestrator | 14:02:14.718 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-11 14:02:14.726114 | orchestrator | 14:02:14.718 STDOUT terraform:       + access_ip_v4        = (known after apply)
2025-06-11 14:02:14.726125 | orchestrator | 14:02:14.718 STDOUT terraform:       + access_ip_v6        = (known after apply)
2025-06-11 14:02:14.726129 | orchestrator | 14:02:14.718 STDOUT terraform:       + all_metadata        = (known after apply)
2025-06-11 14:02:14.726133 | orchestrator | 14:02:14.718 STDOUT terraform:       + all_tags            = (known after apply)
2025-06-11 14:02:14.726137 | orchestrator | 14:02:14.718 STDOUT terraform:       + availability_zone   = "nova"
2025-06-11 14:02:14.726141 | orchestrator | 14:02:14.718 STDOUT terraform:       + config_drive        = true
2025-06-11 14:02:14.726145 | orchestrator | 14:02:14.718 STDOUT terraform:       + created             = (known after apply)
2025-06-11 14:02:14.726149 | orchestrator | 14:02:14.718 STDOUT terraform:       + flavor_id           = (known after apply)
2025-06-11 14:02:14.726152 | orchestrator | 14:02:14.718 STDOUT terraform:       + flavor_name         = "OSISM-8V-32"
2025-06-11 14:02:14.726156 | orchestrator | 14:02:14.718 STDOUT terraform:       + force_delete        = false
2025-06-11 14:02:14.726160 | orchestrator | 14:02:14.718 STDOUT terraform:       + hypervisor_hostname = (known after apply)
2025-06-11 14:02:14.726163 | orchestrator | 14:02:14.718 STDOUT terraform:       + id                  = (known after apply)
2025-06-11 14:02:14.726167 | orchestrator | 14:02:14.718 STDOUT terraform:       + image_id            = (known after apply)
2025-06-11 14:02:14.726171 | orchestrator | 14:02:14.719 STDOUT terraform:       + image_name          = (known after apply)
2025-06-11 14:02:14.726174 | orchestrator | 14:02:14.719 STDOUT terraform:       + key_pair            = "testbed"
2025-06-11 14:02:14.726178 | orchestrator | 14:02:14.719 STDOUT terraform:       + name                = "testbed-node-0"
2025-06-11 14:02:14.726182 | orchestrator | 14:02:14.719 STDOUT terraform:       + power_state         = "active"
2025-06-11 14:02:14.726185 | orchestrator | 14:02:14.719 STDOUT terraform:       + region              = (known after apply)
2025-06-11 14:02:14.726189 | orchestrator | 14:02:14.719 STDOUT terraform:       + security_groups     = (known after apply)
2025-06-11 14:02:14.726193 | orchestrator | 14:02:14.719 STDOUT terraform:       + stop_before_destroy = false
2025-06-11 14:02:14.726196 | orchestrator | 14:02:14.719 STDOUT terraform:       + updated             = (known after apply)
2025-06-11 14:02:14.726200 | orchestrator | 14:02:14.719 STDOUT terraform:       + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-11 14:02:14.726204 | orchestrator | 14:02:14.719 STDOUT terraform:       + block_device {
2025-06-11 14:02:14.726215 | orchestrator | 14:02:14.719 STDOUT terraform:           + boot_index            = 0
2025-06-11 14:02:14.726219 | orchestrator | 14:02:14.719 STDOUT terraform:           + delete_on_termination = false
2025-06-11 14:02:14.726223 | orchestrator | 14:02:14.719 STDOUT terraform:           + destination_type      = "volume"
2025-06-11 14:02:14.726226 | orchestrator | 14:02:14.719 STDOUT terraform:           + multiattach           = false
2025-06-11 14:02:14.726230 | orchestrator | 14:02:14.719 STDOUT terraform:           + source_type           = "volume"
2025-06-11 14:02:14.726234 | orchestrator | 14:02:14.719 STDOUT terraform:           + uuid                  = (known after apply)
2025-06-11 14:02:14.726237 | orchestrator | 14:02:14.719 STDOUT terraform:         }
2025-06-11 14:02:14.726241 | orchestrator | 14:02:14.719 STDOUT terraform:       + network {
2025-06-11 14:02:14.726245 | orchestrator | 14:02:14.719 STDOUT terraform:           + access_network = false
2025-06-11 14:02:14.726249 | orchestrator | 14:02:14.719 STDOUT terraform:           + fixed_ip_v4    = (known after apply)
2025-06-11 14:02:14.726252 | orchestrator | 14:02:14.719 STDOUT terraform:           + fixed_ip_v6    = (known after apply)
2025-06-11 14:02:14.726256 | orchestrator | 14:02:14.719 STDOUT terraform:           + mac            = (known after apply)
2025-06-11 14:02:14.726265 | orchestrator | 14:02:14.719 STDOUT terraform:           + name           = (known after apply)
2025-06-11 14:02:14.726269 | orchestrator | 14:02:14.719 STDOUT terraform:           + port           = (known after apply)
2025-06-11 14:02:14.726273 | orchestrator | 14:02:14.719 STDOUT terraform:           + uuid           = (known after apply)
2025-06-11 14:02:14.726277 | orchestrator | 14:02:14.719 STDOUT terraform:         }
2025-06-11 14:02:14.726280 | orchestrator | 14:02:14.719 STDOUT terraform:     }
2025-06-11 14:02:14.726284 | orchestrator | 14:02:14.719 STDOUT terraform:   # openstack_compute_instance_v2.node_server[1] will be created
2025-06-11 14:02:14.726288 | orchestrator | 14:02:14.719 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-11 14:02:14.726292 | orchestrator | 14:02:14.719 STDOUT terraform:       + access_ip_v4        = (known after apply)
2025-06-11 14:02:14.726295 | orchestrator | 14:02:14.719 STDOUT terraform:       + access_ip_v6        = (known after apply)
2025-06-11 14:02:14.726299 | orchestrator | 14:02:14.719 STDOUT terraform:       + all_metadata        = (known after apply)
2025-06-11 14:02:14.726303 | orchestrator | 14:02:14.719 STDOUT terraform:       + all_tags            = (known after apply)
2025-06-11 14:02:14.726307 | orchestrator | 14:02:14.719 STDOUT terraform:       + availability_zone   = "nova"
2025-06-11 14:02:14.726310 | orchestrator | 14:02:14.719 STDOUT terraform:       + config_drive        = true
2025-06-11 14:02:14.726314 | orchestrator | 14:02:14.719 STDOUT terraform:       + created             = (known after apply)
2025-06-11 14:02:14.726318 | orchestrator | 14:02:14.719 STDOUT terraform:       + flavor_id           = (known after apply)
2025-06-11 14:02:14.726321 | orchestrator | 14:02:14.719 STDOUT terraform:       + flavor_name         = "OSISM-8V-32"
2025-06-11 14:02:14.726325 | orchestrator | 14:02:14.719 STDOUT terraform:       + force_delete        = false
2025-06-11 14:02:14.726331 | orchestrator | 14:02:14.719 STDOUT terraform:       + hypervisor_hostname = (known after apply)
2025-06-11 14:02:14.726338 | orchestrator | 14:02:14.720 STDOUT terraform:       + id                  = (known after apply)
2025-06-11 14:02:14.726342 | orchestrator | 14:02:14.720 STDOUT terraform:       + image_id            = (known after apply)
2025-06-11 14:02:14.726346 | orchestrator | 14:02:14.720 STDOUT terraform:       + image_name          = (known after apply)
2025-06-11 14:02:14.726349 | orchestrator | 14:02:14.720 STDOUT terraform:       + key_pair            = "testbed"
2025-06-11 14:02:14.726353 | orchestrator | 14:02:14.720 STDOUT terraform:       + name                = "testbed-node-1"
2025-06-11 14:02:14.726357 | orchestrator | 14:02:14.720 STDOUT terraform:       + power_state         = "active"
2025-06-11 14:02:14.726360 | orchestrator | 14:02:14.720 STDOUT terraform:       + region              = (known after apply)
2025-06-11 14:02:14.726364 | orchestrator | 14:02:14.720 STDOUT terraform:       + security_groups     = (known after apply)
2025-06-11 14:02:14.726368 | orchestrator | 14:02:14.720 STDOUT terraform:       + stop_before_destroy = false
2025-06-11 14:02:14.726371 | orchestrator | 14:02:14.720 STDOUT terraform:       + updated             = (known after apply)
2025-06-11 14:02:14.726375 | orchestrator | 14:02:14.720 STDOUT terraform:       + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-11 14:02:14.726379 | orchestrator | 14:02:14.720 STDOUT terraform:       + block_device {
2025-06-11 14:02:14.726382 | orchestrator | 14:02:14.720 STDOUT terraform:           + boot_index            = 0
2025-06-11 14:02:14.726388 | orchestrator | 14:02:14.720 STDOUT terraform:           + delete_on_termination = false
2025-06-11 14:02:14.726392 | orchestrator | 14:02:14.720 STDOUT terraform:           + destination_type      = "volume"
2025-06-11 14:02:14.726395 | orchestrator | 14:02:14.720 STDOUT terraform:           + multiattach           = false
2025-06-11 14:02:14.726399 | orchestrator | 14:02:14.720 STDOUT terraform:           + source_type           = "volume"
2025-06-11 14:02:14.726403 | orchestrator | 14:02:14.720 STDOUT terraform:           + uuid                  = (known after apply)
2025-06-11 14:02:14.726407 | orchestrator | 14:02:14.720 STDOUT terraform:         }
2025-06-11 14:02:14.726410 | orchestrator | 14:02:14.720 STDOUT terraform:       + network {
2025-06-11 14:02:14.726417 | orchestrator | 14:02:14.720 STDOUT terraform:           + access_network = false
2025-06-11 14:02:14.726420 | orchestrator | 14:02:14.720 STDOUT terraform:           + fixed_ip_v4    = (known after apply)
2025-06-11 14:02:14.726424 | orchestrator | 14:02:14.720 STDOUT terraform:           + fixed_ip_v6    = (known after apply)
2025-06-11 14:02:14.726428 | orchestrator | 14:02:14.720 STDOUT terraform:           + mac            = (known after apply)
2025-06-11 14:02:14.726431 | orchestrator | 14:02:14.720 STDOUT terraform:           + name           = (known after apply)
2025-06-11 14:02:14.726435 | orchestrator | 14:02:14.720 STDOUT terraform:           + port           = (known after apply)
2025-06-11 14:02:14.726439 | orchestrator | 14:02:14.720 STDOUT terraform:           + uuid           = (known after apply)
2025-06-11 14:02:14.726442 | orchestrator | 14:02:14.720 STDOUT terraform:         }
2025-06-11 14:02:14.726446 | orchestrator | 14:02:14.720 STDOUT terraform:     }
2025-06-11 14:02:14.726450 | orchestrator | 14:02:14.720 STDOUT terraform:   # openstack_compute_instance_v2.node_server[2] will be created
2025-06-11 14:02:14.726454 | orchestrator | 14:02:14.720 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-11 14:02:14.726460 | orchestrator | 14:02:14.720 STDOUT terraform:       + access_ip_v4        = (known after apply)
2025-06-11 14:02:14.726464 | orchestrator | 14:02:14.720 STDOUT terraform:       + access_ip_v6        = (known after apply)
2025-06-11 14:02:14.726468 | orchestrator | 14:02:14.720 STDOUT terraform:       + all_metadata        = (known after apply)
2025-06-11 14:02:14.726471 | orchestrator | 14:02:14.720 STDOUT terraform:       + all_tags            = (known after apply)
2025-06-11 14:02:14.726475 | orchestrator | 14:02:14.720 STDOUT terraform:       + availability_zone   = "nova"
2025-06-11 14:02:14.726479 | orchestrator | 14:02:14.720 STDOUT terraform:       + config_drive        = true
2025-06-11 14:02:14.726482 | orchestrator | 14:02:14.720 STDOUT terraform:       + created             = (known after apply)
2025-06-11 14:02:14.726486 | orchestrator | 14:02:14.720 STDOUT terraform:       + flavor_id           = (known after apply)
2025-06-11 14:02:14.726490 | orchestrator | 14:02:14.720 STDOUT terraform:       + flavor_name         = "OSISM-8V-32"
2025-06-11 14:02:14.726493 | orchestrator | 14:02:14.721 STDOUT terraform:       + force_delete        = false
2025-06-11 14:02:14.726497 | orchestrator | 14:02:14.721 STDOUT terraform:       + hypervisor_hostname = (known after apply)
2025-06-11 14:02:14.726501 | orchestrator | 14:02:14.721 STDOUT terraform:       + id                  = (known after apply)
2025-06-11 14:02:14.726504 | orchestrator | 14:02:14.721 STDOUT terraform:       + image_id            = (known after apply)
2025-06-11 14:02:14.726508 | orchestrator | 14:02:14.721 STDOUT terraform:       + image_name          = (known after apply)
2025-06-11 14:02:14.726512 | orchestrator | 14:02:14.721 STDOUT terraform:       + key_pair            = "testbed"
2025-06-11 14:02:14.726515 | orchestrator | 14:02:14.721 STDOUT terraform:       + name                = "testbed-node-2"
2025-06-11 14:02:14.726519 | orchestrator | 14:02:14.721 STDOUT terraform:       + power_state         = "active"
2025-06-11 14:02:14.726523 | orchestrator | 14:02:14.721 STDOUT terraform:       + region              = (known after apply)
2025-06-11 14:02:14.726526 | orchestrator | 14:02:14.721 STDOUT terraform:       + security_groups     = (known after apply)
2025-06-11 14:02:14.726530 | orchestrator | 14:02:14.721 STDOUT terraform:       + stop_before_destroy = false
2025-06-11 14:02:14.726533 | orchestrator | 14:02:14.721 STDOUT terraform:       + updated             = (known
after apply) 2025-06-11 14:02:14.726537 | orchestrator | 14:02:14.721 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-11 14:02:14.726541 | orchestrator | 14:02:14.721 STDOUT terraform:  + block_device { 2025-06-11 14:02:14.726545 | orchestrator | 14:02:14.721 STDOUT terraform:  + boot_index = 0 2025-06-11 14:02:14.726548 | orchestrator | 14:02:14.721 STDOUT terraform:  + delete_on_termination = false 2025-06-11 14:02:14.726554 | orchestrator | 14:02:14.721 STDOUT terraform:  + destination_type = "volume" 2025-06-11 14:02:14.726558 | orchestrator | 14:02:14.721 STDOUT terraform:  + multiattach = false 2025-06-11 14:02:14.726564 | orchestrator | 14:02:14.721 STDOUT terraform:  + source_type = "volume" 2025-06-11 14:02:14.726568 | orchestrator | 14:02:14.721 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726574 | orchestrator | 14:02:14.721 STDOUT terraform:  } 2025-06-11 14:02:14.726578 | orchestrator | 14:02:14.721 STDOUT terraform:  + network { 2025-06-11 14:02:14.726582 | orchestrator | 14:02:14.721 STDOUT terraform:  + access_network = false 2025-06-11 14:02:14.726585 | orchestrator | 14:02:14.721 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-11 14:02:14.726589 | orchestrator | 14:02:14.721 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-11 14:02:14.726593 | orchestrator | 14:02:14.721 STDOUT terraform:  + mac = (known after apply) 2025-06-11 14:02:14.726596 | orchestrator | 14:02:14.721 STDOUT terraform:  + name = (known after apply) 2025-06-11 14:02:14.726600 | orchestrator | 14:02:14.721 STDOUT terraform:  + port = (known after apply) 2025-06-11 14:02:14.726604 | orchestrator | 14:02:14.721 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726607 | orchestrator | 14:02:14.721 STDOUT terraform:  } 2025-06-11 14:02:14.726613 | orchestrator | 14:02:14.721 STDOUT terraform:  } 2025-06-11 14:02:14.726616 | orchestrator | 14:02:14.721 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-11 14:02:14.726620 | orchestrator | 14:02:14.721 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-11 14:02:14.726624 | orchestrator | 14:02:14.721 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-11 14:02:14.726628 | orchestrator | 14:02:14.721 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-11 14:02:14.726631 | orchestrator | 14:02:14.721 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-11 14:02:14.726635 | orchestrator | 14:02:14.721 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.726639 | orchestrator | 14:02:14.721 STDOUT terraform:  + availability_zone = "nova" 2025-06-11 14:02:14.726642 | orchestrator | 14:02:14.721 STDOUT terraform:  + config_drive = true 2025-06-11 14:02:14.726646 | orchestrator | 14:02:14.721 STDOUT terraform:  + created = (known after apply) 2025-06-11 14:02:14.726650 | orchestrator | 14:02:14.722 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-11 14:02:14.726653 | orchestrator | 14:02:14.722 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-11 14:02:14.726657 | orchestrator | 14:02:14.722 STDOUT terraform:  + force_delete = false 2025-06-11 14:02:14.726661 | orchestrator | 14:02:14.722 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-11 14:02:14.726664 | orchestrator | 14:02:14.722 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.726668 | orchestrator | 14:02:14.722 STDOUT 
terraform:  + image_id = (known after apply) 2025-06-11 14:02:14.726672 | orchestrator | 14:02:14.722 STDOUT terraform:  + image_name = (known after apply) 2025-06-11 14:02:14.726675 | orchestrator | 14:02:14.722 STDOUT terraform:  + key_pair = "testbed" 2025-06-11 14:02:14.726679 | orchestrator | 14:02:14.722 STDOUT terraform:  + name = "testbed-node-3" 2025-06-11 14:02:14.726683 | orchestrator | 14:02:14.722 STDOUT terraform:  + power_state = "active" 2025-06-11 14:02:14.726692 | orchestrator | 14:02:14.722 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.726696 | orchestrator | 14:02:14.722 STDOUT terraform:  + security_groups = (known after apply) 2025-06-11 14:02:14.726700 | orchestrator | 14:02:14.722 STDOUT terraform:  + stop_before_destroy = false 2025-06-11 14:02:14.726703 | orchestrator | 14:02:14.722 STDOUT terraform:  + updated = (known after apply) 2025-06-11 14:02:14.726707 | orchestrator | 14:02:14.722 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-11 14:02:14.726711 | orchestrator | 14:02:14.722 STDOUT terraform:  + block_device { 2025-06-11 14:02:14.726717 | orchestrator | 14:02:14.722 STDOUT terraform:  + boot_index = 0 2025-06-11 14:02:14.726721 | orchestrator | 14:02:14.722 STDOUT terraform:  + delete_on_termination = false 2025-06-11 14:02:14.726725 | orchestrator | 14:02:14.722 STDOUT terraform:  + destination_type = "volume" 2025-06-11 14:02:14.726728 | orchestrator | 14:02:14.722 STDOUT terraform:  + multiattach = false 2025-06-11 14:02:14.726732 | orchestrator | 14:02:14.722 STDOUT terraform:  + source_type = "volume" 2025-06-11 14:02:14.726735 | orchestrator | 14:02:14.722 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726739 | orchestrator | 14:02:14.722 STDOUT terraform:  } 2025-06-11 14:02:14.726743 | orchestrator | 14:02:14.722 STDOUT terraform:  + network { 2025-06-11 14:02:14.726746 | orchestrator | 14:02:14.722 STDOUT terraform:  + access_network = false 2025-06-11 14:02:14.726750 | orchestrator | 14:02:14.722 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-11 14:02:14.726754 | orchestrator | 14:02:14.722 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-11 14:02:14.726757 | orchestrator | 14:02:14.722 STDOUT terraform:  + mac = (known after apply) 2025-06-11 14:02:14.726761 | orchestrator | 14:02:14.722 STDOUT terraform:  + name = (known after apply) 2025-06-11 14:02:14.726765 | orchestrator | 14:02:14.722 STDOUT terraform:  + port = (known after apply) 2025-06-11 14:02:14.726768 | orchestrator | 14:02:14.722 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726772 | orchestrator | 14:02:14.722 STDOUT terraform:  } 2025-06-11 14:02:14.726776 | orchestrator | 14:02:14.722 STDOUT terraform:  } 2025-06-11 14:02:14.726779 | orchestrator | 14:02:14.722 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-06-11 14:02:14.726783 | orchestrator | 14:02:14.722 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-11 14:02:14.726787 | orchestrator | 14:02:14.722 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-11 14:02:14.726790 | orchestrator | 14:02:14.722 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-11 14:02:14.726794 | orchestrator | 14:02:14.722 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-11 14:02:14.726797 | orchestrator | 14:02:14.722 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.726801 | 
orchestrator | 14:02:14.723 STDOUT terraform:  + availability_zone = "nova" 2025-06-11 14:02:14.726808 | orchestrator | 14:02:14.723 STDOUT terraform:  + config_drive = true 2025-06-11 14:02:14.726814 | orchestrator | 14:02:14.723 STDOUT terraform:  + created = (known after apply) 2025-06-11 14:02:14.726817 | orchestrator | 14:02:14.723 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-11 14:02:14.726821 | orchestrator | 14:02:14.723 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-11 14:02:14.726825 | orchestrator | 14:02:14.723 STDOUT terraform:  + force_delete = false 2025-06-11 14:02:14.726829 | orchestrator | 14:02:14.723 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-11 14:02:14.726832 | orchestrator | 14:02:14.723 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.726836 | orchestrator | 14:02:14.723 STDOUT terraform:  + image_id = (known after apply) 2025-06-11 14:02:14.726842 | orchestrator | 14:02:14.723 STDOUT terraform:  + image_name = (known after apply) 2025-06-11 14:02:14.726846 | orchestrator | 14:02:14.723 STDOUT terraform:  + key_pair = "testbed" 2025-06-11 14:02:14.726849 | orchestrator | 14:02:14.723 STDOUT terraform:  + name = "testbed-node-4" 2025-06-11 14:02:14.726877 | orchestrator | 14:02:14.723 STDOUT terraform:  + power_state = "active" 2025-06-11 14:02:14.726881 | orchestrator | 14:02:14.723 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.726885 | orchestrator | 14:02:14.723 STDOUT terraform:  + security_groups = (known after apply) 2025-06-11 14:02:14.726891 | orchestrator | 14:02:14.723 STDOUT terraform:  + stop_before_destroy = false 2025-06-11 14:02:14.726895 | orchestrator | 14:02:14.723 STDOUT terraform:  + updated = (known after apply) 2025-06-11 14:02:14.726899 | orchestrator | 14:02:14.723 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-11 14:02:14.726902 | orchestrator | 14:02:14.723 STDOUT terraform:  + block_device { 2025-06-11 14:02:14.726906 | orchestrator | 14:02:14.723 STDOUT terraform:  + boot_index = 0 2025-06-11 14:02:14.726910 | orchestrator | 14:02:14.723 STDOUT terraform:  + delete_on_termination = false 2025-06-11 14:02:14.726914 | orchestrator | 14:02:14.723 STDOUT terraform:  + destination_type = "volume" 2025-06-11 14:02:14.726917 | orchestrator | 14:02:14.723 STDOUT terraform:  + multiattach = false 2025-06-11 14:02:14.726921 | orchestrator | 14:02:14.723 STDOUT terraform:  + source_type = "volume" 2025-06-11 14:02:14.726925 | orchestrator | 14:02:14.723 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726928 | orchestrator | 14:02:14.723 STDOUT terraform:  } 2025-06-11 14:02:14.726932 | orchestrator | 14:02:14.723 STDOUT terraform:  + network { 2025-06-11 14:02:14.726936 | orchestrator | 14:02:14.723 STDOUT terraform:  + access_network = false 2025-06-11 14:02:14.726939 | orchestrator | 14:02:14.723 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-11 14:02:14.726943 | orchestrator | 14:02:14.723 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-11 14:02:14.726947 | orchestrator | 14:02:14.723 STDOUT terraform:  + mac = (known after apply) 2025-06-11 14:02:14.726953 | orchestrator | 14:02:14.723 STDOUT terraform:  + name = (known after apply) 2025-06-11 14:02:14.726957 | orchestrator | 14:02:14.723 STDOUT terraform:  + port = (known after apply) 2025-06-11 14:02:14.726961 | orchestrator | 14:02:14.723 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.726965 | 
orchestrator | 14:02:14.723 STDOUT terraform:  } 2025-06-11 14:02:14.726968 | orchestrator | 14:02:14.723 STDOUT terraform:  } 2025-06-11 14:02:14.726972 | orchestrator | 14:02:14.723 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-06-11 14:02:14.726976 | orchestrator | 14:02:14.723 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-11 14:02:14.726979 | orchestrator | 14:02:14.723 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-11 14:02:14.726983 | orchestrator | 14:02:14.723 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-11 14:02:14.726987 | orchestrator | 14:02:14.724 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-11 14:02:14.726991 | orchestrator | 14:02:14.724 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.726994 | orchestrator | 14:02:14.724 STDOUT terraform:  + availability_zone = "nova" 2025-06-11 14:02:14.726998 | orchestrator | 14:02:14.724 STDOUT terraform:  + config_drive = true 2025-06-11 14:02:14.727001 | orchestrator | 14:02:14.724 STDOUT terraform:  + created = (known after apply) 2025-06-11 14:02:14.727005 | orchestrator | 14:02:14.724 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-11 14:02:14.727009 | orchestrator | 14:02:14.724 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-11 14:02:14.727012 | orchestrator | 14:02:14.724 STDOUT terraform:  + force_delete = false 2025-06-11 14:02:14.727016 | orchestrator | 14:02:14.724 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-11 14:02:14.727020 | orchestrator | 14:02:14.724 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.727023 | orchestrator | 14:02:14.724 STDOUT terraform:  + image_id = (known after apply) 2025-06-11 14:02:14.727027 | orchestrator | 14:02:14.724 STDOUT terraform:  + image_name = (known after apply) 2025-06-11 14:02:14.727031 | orchestrator | 14:02:14.724 STDOUT terraform:  + key_pair = "testbed" 2025-06-11 14:02:14.727036 | orchestrator | 14:02:14.724 STDOUT terraform:  + name = "testbed-node-5" 2025-06-11 14:02:14.727040 | orchestrator | 14:02:14.724 STDOUT terraform:  + power_state = "active" 2025-06-11 14:02:14.727043 | orchestrator | 14:02:14.724 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.727047 | orchestrator | 14:02:14.724 STDOUT terraform:  + security_groups = (known after apply) 2025-06-11 14:02:14.727051 | orchestrator | 14:02:14.724 STDOUT terraform:  + stop_before_destroy = false 2025-06-11 14:02:14.727054 | orchestrator | 14:02:14.724 STDOUT terraform:  + updated = (known after apply) 2025-06-11 14:02:14.727058 | orchestrator | 14:02:14.724 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-11 14:02:14.727062 | orchestrator | 14:02:14.724 STDOUT terraform:  + block_device { 2025-06-11 14:02:14.727069 | orchestrator | 14:02:14.724 STDOUT terraform:  + boot_index = 0 2025-06-11 14:02:14.727072 | orchestrator | 14:02:14.724 STDOUT terraform:  + delete_on_termination = false 2025-06-11 14:02:14.727076 | orchestrator | 14:02:14.724 STDOUT terraform:  + destination_type = "volume" 2025-06-11 14:02:14.727080 | orchestrator | 14:02:14.724 STDOUT terraform:  + multiattach = false 2025-06-11 14:02:14.727083 | orchestrator | 14:02:14.724 STDOUT terraform:  + source_type = "volume" 2025-06-11 14:02:14.727087 | orchestrator | 14:02:14.724 STDOUT terraform:  + uuid = (known after apply) 2025-06-11 14:02:14.727090 | orchestrator | 14:02:14.724 
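The six identical server entries above are what a single counted resource expands to. A minimal HCL sketch of the kind of definition that would produce this plan; the count, the volume and port references, and the user_data source are assumptions, not the testbed's actual configuration. Only values printed in the plan (flavor, key pair, zone, boot-from-volume layout) are taken as given:

# Hypothetical reconstruction of the node definition, inferred from the plan.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                          # node_server[0]..[5] in the plan
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")      # assumed source; the plan shows only its hash

  # Boot from a pre-created volume: matches boot_index = 0 and
  # source_type = destination_type = "volume" in the plan.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base[count.index].id   # assumed volume resource
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach via a pre-built port, which is why the plan shows
  # port = (known after apply) rather than a network name.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}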
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
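The keypair entry shows private_key as a sensitive value and public_key as known-after-apply, which is the pattern the OpenStack provider produces when no public key is supplied and the pair is generated at create time. A sketch under that assumption:

resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
  # public_key omitted: the provider generates the pair and exports
  # private_key (sensitive) and public_key after apply.
}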
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
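Nine attachment entries for six servers imply that some nodes receive more than one extra volume; the exact instance-to-volume mapping is not visible in the plan because both IDs are known only after apply. A hedged sketch of the shape of such a resource; the element() mapping and the volume resource name are assumptions:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # element() wraps around the six servers; the real mapping may differ.
  instance_id = element(openstack_compute_instance_v2.node_server[*].id, count.index)
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id   # assumed resource name
}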
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-11 14:02:14.734241 | orchestrator | 14:02:14.734 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-11 14:02:14.734270 | orchestrator | 14:02:14.734 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-11 14:02:14.734304 | orchestrator | 14:02:14.734 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.734349 | orchestrator | 14:02:14.734 STDOUT terraform:  + port_id = (known after apply) 2025-06-11 14:02:14.734379 | orchestrator | 14:02:14.734 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.734388 | orchestrator | 14:02:14.734 STDOUT terraform:  } 2025-06-11 14:02:14.734439 | orchestrator | 14:02:14.734 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-11 14:02:14.734485 | orchestrator | 14:02:14.734 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-11 14:02:14.734511 | orchestrator | 14:02:14.734 STDOUT terraform:  + address = (known after apply) 2025-06-11 14:02:14.734539 | orchestrator | 14:02:14.734 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.734565 | orchestrator | 14:02:14.734 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-11 14:02:14.734592 | orchestrator | 14:02:14.734 STDOUT terraform:  + dns_name = (known after apply) 2025-06-11 14:02:14.734618 | orchestrator | 14:02:14.734 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-11 14:02:14.734643 | orchestrator | 14:02:14.734 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.734668 | orchestrator | 14:02:14.734 STDOUT terraform:  + pool = "public" 2025-06-11 14:02:14.734697 | orchestrator | 14:02:14.734 STDOUT terraform:  + port_id = (known after apply) 2025-06-11 14:02:14.734724 | orchestrator | 14:02:14.734 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.734749 | orchestrator | 14:02:14.734 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-11 14:02:14.734765 | orchestrator | 14:02:14.734 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.734784 | orchestrator | 14:02:14.734 STDOUT terraform:  } 2025-06-11 14:02:14.734828 | orchestrator | 14:02:14.734 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-11 14:02:14.734900 | orchestrator | 14:02:14.734 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-11 14:02:14.734927 | orchestrator | 14:02:14.734 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-11 14:02:14.734964 | orchestrator | 14:02:14.734 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.734987 | orchestrator | 14:02:14.734 STDOUT terraform:  + availability_zone_hints = [ 2025-06-11 14:02:14.734993 | orchestrator | 14:02:14.734 STDOUT terraform:  + "nova", 2025-06-11 14:02:14.735048 | orchestrator | 14:02:14.734 STDOUT terraform:  ] 2025-06-11 14:02:14.735089 | orchestrator | 14:02:14.735 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-11 14:02:14.735126 | orchestrator | 14:02:14.735 STDOUT terraform:  + external = (known after apply) 2025-06-11 14:02:14.735167 | orchestrator | 14:02:14.735 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.735207 | orchestrator | 14:02:14.735 STDOUT terraform:  + mtu = (known after apply) 2025-06-11 14:02:14.735248 | orchestrator | 14:02:14.735 STDOUT terraform:  + name = 
"net-testbed-management" 2025-06-11 14:02:14.735286 | orchestrator | 14:02:14.735 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-11 14:02:14.735322 | orchestrator | 14:02:14.735 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-11 14:02:14.735367 | orchestrator | 14:02:14.735 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.735409 | orchestrator | 14:02:14.735 STDOUT terraform:  + shared = (known after apply) 2025-06-11 14:02:14.735443 | orchestrator | 14:02:14.735 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.735480 | orchestrator | 14:02:14.735 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-11 14:02:14.735505 | orchestrator | 14:02:14.735 STDOUT terraform:  + segments (known after apply) 2025-06-11 14:02:14.735512 | orchestrator | 14:02:14.735 STDOUT terraform:  } 2025-06-11 14:02:14.735561 | orchestrator | 14:02:14.735 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-11 14:02:14.735609 | orchestrator | 14:02:14.735 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-11 14:02:14.735646 | orchestrator | 14:02:14.735 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-11 14:02:14.735681 | orchestrator | 14:02:14.735 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-11 14:02:14.735718 | orchestrator | 14:02:14.735 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-11 14:02:14.735755 | orchestrator | 14:02:14.735 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.735792 | orchestrator | 14:02:14.735 STDOUT terraform:  + device_id = (known after apply) 2025-06-11 14:02:14.735830 | orchestrator | 14:02:14.735 STDOUT terraform:  + device_owner = (known after apply) 2025-06-11 14:02:14.735889 | orchestrator | 14:02:14.735 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-11 14:02:14.735927 | orchestrator | 14:02:14.735 STDOUT terraform:  + dns_name = (known after apply) 2025-06-11 14:02:14.735968 | orchestrator | 14:02:14.735 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.736006 | orchestrator | 14:02:14.735 STDOUT terraform:  + mac_address = (known after apply) 2025-06-11 14:02:14.736043 | orchestrator | 14:02:14.735 STDOUT terraform:  + network_id = (known after apply) 2025-06-11 14:02:14.736080 | orchestrator | 14:02:14.736 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-11 14:02:14.736115 | orchestrator | 14:02:14.736 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-11 14:02:14.736152 | orchestrator | 14:02:14.736 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.736187 | orchestrator | 14:02:14.736 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-11 14:02:14.736225 | orchestrator | 14:02:14.736 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.736249 | orchestrator | 14:02:14.736 STDOUT terraform:  + allowed_address_pairs { 2025-06-11 14:02:14.736278 | orchestrator | 14:02:14.736 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-11 14:02:14.736285 | orchestrator | 14:02:14.736 STDOUT terraform:  } 2025-06-11 14:02:14.736313 | orchestrator | 14:02:14.736 STDOUT terraform:  + allowed_address_pairs { 2025-06-11 14:02:14.736342 | orchestrator | 14:02:14.736 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-11 14:02:14.736433 | orchestrator | 14:02:14.736 STDOUT 
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
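Each node port pins a fixed management address (192.168.16.10 through .15) and whitelists the same four allowed-address pairs, so VIPs and routed prefixes can move between nodes without tripping port security. A sketch of the likely definition; the subnet reference is an assumption:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
    ip_address = "192.168.16.${count.index + 10}"
  }

  # Addresses the port may legitimately source beyond its fixed IP.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}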
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
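The router uplinks to a pre-existing external network referenced by ID and is then attached to the management subnet through a separate interface resource. Sketch; the subnet reference is an assumption:

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
}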
14:02:14.743 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-11 14:02:14.743926 | orchestrator | 14:02:14.743 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.743952 | orchestrator | 14:02:14.743 STDOUT terraform:  + availability_zone_hints = [ 2025-06-11 14:02:14.743962 | orchestrator | 14:02:14.743 STDOUT terraform:  + "nova", 2025-06-11 14:02:14.743968 | orchestrator | 14:02:14.743 STDOUT terraform:  ] 2025-06-11 14:02:14.744012 | orchestrator | 14:02:14.743 STDOUT terraform:  + distributed = (known after apply) 2025-06-11 14:02:14.744048 | orchestrator | 14:02:14.744 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-11 14:02:14.744100 | orchestrator | 14:02:14.744 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-11 14:02:14.744178 | orchestrator | 14:02:14.744 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-06-11 14:02:14.744242 | orchestrator | 14:02:14.744 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.744273 | orchestrator | 14:02:14.744 STDOUT terraform:  + name = "testbed" 2025-06-11 14:02:14.744313 | orchestrator | 14:02:14.744 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.744356 | orchestrator | 14:02:14.744 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.744386 | orchestrator | 14:02:14.744 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-11 14:02:14.744393 | orchestrator | 14:02:14.744 STDOUT terraform:  } 2025-06-11 14:02:14.744451 | orchestrator | 14:02:14.744 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-11 14:02:14.744507 | orchestrator | 14:02:14.744 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-11 14:02:14.744532 | orchestrator | 14:02:14.744 STDOUT terraform:  + description = "ssh" 2025-06-11 14:02:14.744563 | orchestrator | 14:02:14.744 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.744589 | orchestrator | 14:02:14.744 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.744630 | orchestrator | 14:02:14.744 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.744654 | orchestrator | 14:02:14.744 STDOUT terraform:  + port_range_max = 22 2025-06-11 14:02:14.744678 | orchestrator | 14:02:14.744 STDOUT terraform:  + port_range_min = 22 2025-06-11 14:02:14.744706 | orchestrator | 14:02:14.744 STDOUT terraform:  + protocol = "tcp" 2025-06-11 14:02:14.744744 | orchestrator | 14:02:14.744 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.744779 | orchestrator | 14:02:14.744 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.744815 | orchestrator | 14:02:14.744 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.744845 | orchestrator | 14:02:14.744 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.744907 | orchestrator | 14:02:14.744 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.744944 | orchestrator | 14:02:14.744 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.744951 | orchestrator | 14:02:14.744 STDOUT terraform:  } 2025-06-11 14:02:14.745007 | orchestrator | 14:02:14.744 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-11 14:02:14.745061 | orchestrator | 14:02:14.745 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-11 14:02:14.745093 | orchestrator | 14:02:14.745 STDOUT terraform:  + description = "wireguard" 2025-06-11 14:02:14.745125 | orchestrator | 14:02:14.745 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.745151 | orchestrator | 14:02:14.745 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.745189 | orchestrator | 14:02:14.745 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.745215 | orchestrator | 14:02:14.745 STDOUT terraform:  + port_range_max = 51820 2025-06-11 14:02:14.745240 | orchestrator | 14:02:14.745 STDOUT terraform:  + port_range_min = 51820 2025-06-11 14:02:14.745267 | orchestrator | 14:02:14.745 STDOUT terraform:  + protocol = "udp" 2025-06-11 14:02:14.745303 | orchestrator | 14:02:14.745 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.745339 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.745377 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.745406 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.745448 | orchestrator | 14:02:14.745 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.745481 | orchestrator | 14:02:14.745 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.745489 | orchestrator | 14:02:14.745 STDOUT terraform:  } 2025-06-11 14:02:14.745587 | orchestrator | 14:02:14.745 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-11 14:02:14.745595 | orchestrator | 14:02:14.745 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-11 14:02:14.745635 | orchestrator | 14:02:14.745 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.745643 | orchestrator | 14:02:14.745 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.745685 | orchestrator | 14:02:14.745 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.745712 | orchestrator | 14:02:14.745 STDOUT terraform:  + protocol = "tcp" 2025-06-11 14:02:14.745748 | orchestrator | 14:02:14.745 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.745784 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.745819 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.745883 | orchestrator | 14:02:14.745 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-11 14:02:14.745893 | orchestrator | 14:02:14.745 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.745935 | orchestrator | 14:02:14.745 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.745943 | orchestrator | 14:02:14.745 STDOUT terraform:  } 2025-06-11 14:02:14.745998 | orchestrator | 14:02:14.745 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-11 14:02:14.746096 | orchestrator | 14:02:14.745 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-11 14:02:14.746125 | orchestrator | 14:02:14.746 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.746161 | orchestrator | 14:02:14.746 STDOUT terraform:  
+ ethertype = "IPv4" 2025-06-11 14:02:14.746202 | orchestrator | 14:02:14.746 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.746229 | orchestrator | 14:02:14.746 STDOUT terraform:  + protocol = "udp" 2025-06-11 14:02:14.746267 | orchestrator | 14:02:14.746 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.746303 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.746341 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.746377 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-11 14:02:14.746413 | orchestrator | 14:02:14.746 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.746451 | orchestrator | 14:02:14.746 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.746458 | orchestrator | 14:02:14.746 STDOUT terraform:  } 2025-06-11 14:02:14.746511 | orchestrator | 14:02:14.746 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-11 14:02:14.746565 | orchestrator | 14:02:14.746 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-11 14:02:14.746597 | orchestrator | 14:02:14.746 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.746622 | orchestrator | 14:02:14.746 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.746661 | orchestrator | 14:02:14.746 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.746689 | orchestrator | 14:02:14.746 STDOUT terraform:  + protocol = "icmp" 2025-06-11 14:02:14.746731 | orchestrator | 14:02:14.746 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.746764 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.746801 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.746833 | orchestrator | 14:02:14.746 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.746889 | orchestrator | 14:02:14.746 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.746925 | orchestrator | 14:02:14.746 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.746933 | orchestrator | 14:02:14.746 STDOUT terraform:  } 2025-06-11 14:02:14.746987 | orchestrator | 14:02:14.746 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-11 14:02:14.747038 | orchestrator | 14:02:14.746 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-11 14:02:14.747070 | orchestrator | 14:02:14.747 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.747096 | orchestrator | 14:02:14.747 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.747135 | orchestrator | 14:02:14.747 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.747160 | orchestrator | 14:02:14.747 STDOUT terraform:  + protocol = "tcp" 2025-06-11 14:02:14.747200 | orchestrator | 14:02:14.747 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.747236 | orchestrator | 14:02:14.747 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.747272 | orchestrator | 14:02:14.747 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 
14:02:14.747303 | orchestrator | 14:02:14.747 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.747340 | orchestrator | 14:02:14.747 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.747377 | orchestrator | 14:02:14.747 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.747385 | orchestrator | 14:02:14.747 STDOUT terraform:  } 2025-06-11 14:02:14.747444 | orchestrator | 14:02:14.747 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-11 14:02:14.747493 | orchestrator | 14:02:14.747 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-11 14:02:14.747523 | orchestrator | 14:02:14.747 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.747550 | orchestrator | 14:02:14.747 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.748384 | orchestrator | 14:02:14.748 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.748412 | orchestrator | 14:02:14.748 STDOUT terraform:  + protocol = "udp" 2025-06-11 14:02:14.748453 | orchestrator | 14:02:14.748 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.748492 | orchestrator | 14:02:14.748 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.748531 | orchestrator | 14:02:14.748 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.748562 | orchestrator | 14:02:14.748 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.748599 | orchestrator | 14:02:14.748 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.748638 | orchestrator | 14:02:14.748 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.748645 | orchestrator | 14:02:14.748 STDOUT terraform:  } 2025-06-11 14:02:14.748701 | orchestrator | 14:02:14.748 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-11 14:02:14.748752 | orchestrator | 14:02:14.748 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-11 14:02:14.748785 | orchestrator | 14:02:14.748 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.748812 | orchestrator | 14:02:14.748 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.748851 | orchestrator | 14:02:14.748 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.748905 | orchestrator | 14:02:14.748 STDOUT terraform:  + protocol = "icmp" 2025-06-11 14:02:14.748945 | orchestrator | 14:02:14.748 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.748981 | orchestrator | 14:02:14.748 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.749018 | orchestrator | 14:02:14.748 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.749049 | orchestrator | 14:02:14.749 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.749086 | orchestrator | 14:02:14.749 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.749124 | orchestrator | 14:02:14.749 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.749132 | orchestrator | 14:02:14.749 STDOUT terraform:  } 2025-06-11 14:02:14.749184 | orchestrator | 14:02:14.749 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-11 14:02:14.749233 | orchestrator | 14:02:14.749 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-11 14:02:14.749261 | orchestrator | 14:02:14.749 STDOUT terraform:  + description = "vrrp" 2025-06-11 14:02:14.749291 | orchestrator | 14:02:14.749 STDOUT terraform:  + direction = "ingress" 2025-06-11 14:02:14.749340 | orchestrator | 14:02:14.749 STDOUT terraform:  + ethertype = "IPv4" 2025-06-11 14:02:14.749347 | orchestrator | 14:02:14.749 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.749375 | orchestrator | 14:02:14.749 STDOUT terraform:  + protocol = "112" 2025-06-11 14:02:14.749413 | orchestrator | 14:02:14.749 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.749449 | orchestrator | 14:02:14.749 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-11 14:02:14.749486 | orchestrator | 14:02:14.749 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-11 14:02:14.749516 | orchestrator | 14:02:14.749 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-11 14:02:14.749553 | orchestrator | 14:02:14.749 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-11 14:02:14.749591 | orchestrator | 14:02:14.749 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.749599 | orchestrator | 14:02:14.749 STDOUT terraform:  } 2025-06-11 14:02:14.749649 | orchestrator | 14:02:14.749 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-11 14:02:14.749699 | orchestrator | 14:02:14.749 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-11 14:02:14.749728 | orchestrator | 14:02:14.749 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.749786 | orchestrator | 14:02:14.749 STDOUT terraform:  + description = "management security group" 2025-06-11 14:02:14.749793 | orchestrator | 14:02:14.749 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.749815 | orchestrator | 14:02:14.749 STDOUT terraform:  + name = "testbed-management" 2025-06-11 14:02:14.749846 | orchestrator | 14:02:14.749 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.749884 | orchestrator | 14:02:14.749 STDOUT terraform:  + stateful = (known after apply) 2025-06-11 14:02:14.749912 | orchestrator | 14:02:14.749 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.749920 | orchestrator | 14:02:14.749 STDOUT terraform:  } 2025-06-11 14:02:14.750418 | orchestrator | 14:02:14.749 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-11 14:02:14.750479 | orchestrator | 14:02:14.750 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-11 14:02:14.750516 | orchestrator | 14:02:14.750 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.750547 | orchestrator | 14:02:14.750 STDOUT terraform:  + description = "node security group" 2025-06-11 14:02:14.750583 | orchestrator | 14:02:14.750 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.750611 | orchestrator | 14:02:14.750 STDOUT terraform:  + name = "testbed-node" 2025-06-11 14:02:14.750644 | orchestrator | 14:02:14.750 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.750676 | orchestrator | 14:02:14.750 STDOUT terraform:  + stateful = 
(known after apply) 2025-06-11 14:02:14.750706 | orchestrator | 14:02:14.750 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.750727 | orchestrator | 14:02:14.750 STDOUT terraform:  } 2025-06-11 14:02:14.750775 | orchestrator | 14:02:14.750 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-11 14:02:14.750823 | orchestrator | 14:02:14.750 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-11 14:02:14.750888 | orchestrator | 14:02:14.750 STDOUT terraform:  + all_tags = (known after apply) 2025-06-11 14:02:14.750898 | orchestrator | 14:02:14.750 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-11 14:02:14.750924 | orchestrator | 14:02:14.750 STDOUT terraform:  + dns_nameservers = [ 2025-06-11 14:02:14.750946 | orchestrator | 14:02:14.750 STDOUT terraform:  + "8.8.8.8", 2025-06-11 14:02:14.750960 | orchestrator | 14:02:14.750 STDOUT terraform:  + "9.9.9.9", 2025-06-11 14:02:14.750981 | orchestrator | 14:02:14.750 STDOUT terraform:  ] 2025-06-11 14:02:14.751004 | orchestrator | 14:02:14.750 STDOUT terraform:  + enable_dhcp = true 2025-06-11 14:02:14.751038 | orchestrator | 14:02:14.750 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-11 14:02:14.751074 | orchestrator | 14:02:14.751 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.751096 | orchestrator | 14:02:14.751 STDOUT terraform:  + ip_version = 4 2025-06-11 14:02:14.751131 | orchestrator | 14:02:14.751 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-11 14:02:14.751164 | orchestrator | 14:02:14.751 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-11 14:02:14.751207 | orchestrator | 14:02:14.751 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-11 14:02:14.751239 | orchestrator | 14:02:14.751 STDOUT terraform:  + network_id = (known after apply) 2025-06-11 14:02:14.751265 | orchestrator | 14:02:14.751 STDOUT terraform:  + no_gateway = false 2025-06-11 14:02:14.751296 | orchestrator | 14:02:14.751 STDOUT terraform:  + region = (known after apply) 2025-06-11 14:02:14.751333 | orchestrator | 14:02:14.751 STDOUT terraform:  + service_types = (known after apply) 2025-06-11 14:02:14.751366 | orchestrator | 14:02:14.751 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-11 14:02:14.751395 | orchestrator | 14:02:14.751 STDOUT terraform:  + allocation_pool { 2025-06-11 14:02:14.751420 | orchestrator | 14:02:14.751 STDOUT terraform:  + end = "192.168.31.250" 2025-06-11 14:02:14.751450 | orchestrator | 14:02:14.751 STDOUT terraform:  + start = "192.168.31.200" 2025-06-11 14:02:14.751458 | orchestrator | 14:02:14.751 STDOUT terraform:  } 2025-06-11 14:02:14.751478 | orchestrator | 14:02:14.751 STDOUT terraform:  } 2025-06-11 14:02:14.751507 | orchestrator | 14:02:14.751 STDOUT terraform:  # terraform_data.image will be created 2025-06-11 14:02:14.751535 | orchestrator | 14:02:14.751 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-11 14:02:14.751564 | orchestrator | 14:02:14.751 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.751588 | orchestrator | 14:02:14.751 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-11 14:02:14.751617 | orchestrator | 14:02:14.751 STDOUT terraform:  + output = (known after apply) 2025-06-11 14:02:14.751624 | orchestrator | 14:02:14.751 STDOUT terraform:  } 2025-06-11 14:02:14.751665 | orchestrator | 14:02:14.751 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-11 
14:02:14.751694 | orchestrator | 14:02:14.751 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-11 14:02:14.751720 | orchestrator | 14:02:14.751 STDOUT terraform:  + id = (known after apply) 2025-06-11 14:02:14.751746 | orchestrator | 14:02:14.751 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-11 14:02:14.751843 | orchestrator | 14:02:14.751 STDOUT terraform:  + output = (known after apply) 2025-06-11 14:02:14.751915 | orchestrator | 14:02:14.751 STDOUT terraform:  } 2025-06-11 14:02:14.751956 | orchestrator | 14:02:14.751 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-11 14:02:14.751984 | orchestrator | 14:02:14.751 STDOUT terraform: Changes to Outputs: 2025-06-11 14:02:14.752015 | orchestrator | 14:02:14.751 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-11 14:02:14.752048 | orchestrator | 14:02:14.752 STDOUT terraform:  + private_key = (sensitive value) 2025-06-11 14:02:14.814084 | orchestrator | 14:02:14.812 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-11 14:02:14.814149 | orchestrator | 14:02:14.813 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=18f7a685-0a97-797e-60ad-096cdbae19ee] 2025-06-11 14:02:14.962093 | orchestrator | 14:02:14.960 STDOUT terraform: terraform_data.image: Creating... 2025-06-11 14:02:14.963296 | orchestrator | 14:02:14.962 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=70cd8e4b-e06f-f287-6303-8b05533fc23c] 2025-06-11 14:02:14.987122 | orchestrator | 14:02:14.986 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-11 14:02:14.987175 | orchestrator | 14:02:14.986 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-11 14:02:15.001068 | orchestrator | 14:02:14.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-11 14:02:15.006264 | orchestrator | 14:02:15.003 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-11 14:02:15.009649 | orchestrator | 14:02:15.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-11 14:02:15.011799 | orchestrator | 14:02:15.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-11 14:02:15.012674 | orchestrator | 14:02:15.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-11 14:02:15.013730 | orchestrator | 14:02:15.013 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-11 14:02:15.015285 | orchestrator | 14:02:15.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-11 14:02:15.015923 | orchestrator | 14:02:15.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-11 14:02:15.547626 | orchestrator | 14:02:15.547 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-11 14:02:15.553937 | orchestrator | 14:02:15.550 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-11 14:02:15.555229 | orchestrator | 14:02:15.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-11 14:02:15.555480 | orchestrator | 14:02:15.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2025-06-11 14:02:15.572489 | orchestrator | 14:02:15.571 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-11 14:02:15.576444 | orchestrator | 14:02:15.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-11 14:02:21.131331 | orchestrator | 14:02:21.130 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=5f71d8e3-31ae-472a-b57d-3f3201bcf6ae] 2025-06-11 14:02:21.140981 | orchestrator | 14:02:21.140 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-11 14:02:25.004421 | orchestrator | 14:02:25.003 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-11 14:02:25.014111 | orchestrator | 14:02:25.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-11 14:02:25.016324 | orchestrator | 14:02:25.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-11 14:02:25.016409 | orchestrator | 14:02:25.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-11 14:02:25.016726 | orchestrator | 14:02:25.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-11 14:02:25.018521 | orchestrator | 14:02:25.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-11 14:02:25.556901 | orchestrator | 14:02:25.556 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-11 14:02:25.557006 | orchestrator | 14:02:25.556 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-11 14:02:25.577958 | orchestrator | 14:02:25.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-11 14:02:25.625233 | orchestrator | 14:02:25.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=f26631de-4d53-47c9-822c-cbb2033e0b86] 2025-06-11 14:02:25.625306 | orchestrator | 14:02:25.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=e952eadf-b7fa-49e6-b121-e808f2d1456b] 2025-06-11 14:02:25.632887 | orchestrator | 14:02:25.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-11 14:02:25.633339 | orchestrator | 14:02:25.633 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-11 14:02:25.637473 | orchestrator | 14:02:25.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=98e4ef65-326b-406b-8d68-9bbb471a6ffc] 2025-06-11 14:02:25.641943 | orchestrator | 14:02:25.641 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=7ece5900bbd43f5fd31faec09f1df759b3e37729] 2025-06-11 14:02:25.643654 | orchestrator | 14:02:25.643 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-11 14:02:25.654880 | orchestrator | 14:02:25.654 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=997790a1-2284-4ae8-ae59-5b744e390299] 2025-06-11 14:02:25.657697 | orchestrator | 14:02:25.657 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
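The indices above imply nine data volumes (node_volume[0..8]), six node base volumes, and one manager base volume. A minimal HCL sketch of such resources, where only the counts and resource names are taken from the log; name patterns, sizes, and the image reference are illustrative assumptions:

```hcl
# Sketch of the volume resources being created above. Counts follow the
# indices in the log; name, size, and image_id are illustrative assumptions,
# not values taken from the testbed source.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-node-volume-${count.index}" # assumed naming scheme
  size  = 20                                   # size not shown in this log
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count = 6
  name  = "testbed-node-base-volume-${count.index}" # assumed
  size  = 50                                        # assumed
  # Base volumes are built from the image the data source above resolved.
  image_id = data.openstack_images_image_v2.image_node.id
}
```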
2025-06-11 14:02:25.659012 | orchestrator | 14:02:25.658 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-11 14:02:25.669497 | orchestrator | 14:02:25.669 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=75267c96-c7d6-45ef-a5a6-94b8e66fe961] 2025-06-11 14:02:25.676133 | orchestrator | 14:02:25.676 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-11 14:02:25.680361 | orchestrator | 14:02:25.680 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b941b6c64e14c7f9de869cfa8c632bcc2b4ba167] 2025-06-11 14:02:25.681501 | orchestrator | 14:02:25.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=1d2dd3c0-811b-40b4-99af-5946e13dbfd3] 2025-06-11 14:02:25.685525 | orchestrator | 14:02:25.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-11 14:02:25.688018 | orchestrator | 14:02:25.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-11 14:02:25.764686 | orchestrator | 14:02:25.764 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=df292424-6e82-4e61-a52c-dd60099c8b3b] 2025-06-11 14:02:25.772629 | orchestrator | 14:02:25.772 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-11 14:02:25.792159 | orchestrator | 14:02:25.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=5fa61c96-5ca4-4fa7-9393-6e2780ce67d9] 2025-06-11 14:02:25.794500 | orchestrator | 14:02:25.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=0531c1ed-639b-4ab3-bbe7-14f10d387a86] 2025-06-11 14:02:31.144064 | orchestrator | 14:02:31.143 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-11 14:02:31.445433 | orchestrator | 14:02:31.445 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=bca7a191-125c-4247-b6ce-7dd8546876b1] 2025-06-11 14:02:31.915737 | orchestrator | 14:02:31.915 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=57901946-a2fc-4549-bdc1-0947c1f7abba] 2025-06-11 14:02:31.923228 | orchestrator | 14:02:31.923 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-11 14:02:35.631948 | orchestrator | 14:02:35.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-11 14:02:35.644645 | orchestrator | 14:02:35.644 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-11 14:02:35.659020 | orchestrator | 14:02:35.658 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-11 14:02:35.660272 | orchestrator | 14:02:35.659 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-11 14:02:35.686633 | orchestrator | 14:02:35.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-11 14:02:35.688769 | orchestrator | 14:02:35.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... 
[10s elapsed] 2025-06-11 14:02:35.970952 | orchestrator | 14:02:35.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=947aec13-e8a1-49a8-a984-efdbf69cffa9] 2025-06-11 14:02:36.016088 | orchestrator | 14:02:36.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=654660fe-f50d-4b40-a68e-7b359b072d1b] 2025-06-11 14:02:36.048769 | orchestrator | 14:02:36.048 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=b0c481cc-e968-4619-84fd-240890fb97cb] 2025-06-11 14:02:36.057259 | orchestrator | 14:02:36.056 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b] 2025-06-11 14:02:36.075559 | orchestrator | 14:02:36.075 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=c820f619-9360-49d1-97de-f4f9700f6b29] 2025-06-11 14:02:36.100797 | orchestrator | 14:02:36.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=7d16ba3d-3882-40bf-a888-e0945c42bfad] 2025-06-11 14:02:39.805261 | orchestrator | 14:02:39.804 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=12a6f229-708e-4dbd-9e14-c7a4c0155fa5] 2025-06-11 14:02:39.811275 | orchestrator | 14:02:39.811 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-11 14:02:39.813983 | orchestrator | 14:02:39.813 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-11 14:02:39.817057 | orchestrator | 14:02:39.816 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-11 14:02:40.023812 | orchestrator | 14:02:40.023 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9fe1316e-cab0-45ac-a04e-f20b428ba167] 2025-06-11 14:02:40.036458 | orchestrator | 14:02:40.036 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-11 14:02:40.039263 | orchestrator | 14:02:40.039 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-11 14:02:40.040092 | orchestrator | 14:02:40.039 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-11 14:02:40.042423 | orchestrator | 14:02:40.042 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-11 14:02:40.044851 | orchestrator | 14:02:40.044 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-11 14:02:40.045211 | orchestrator | 14:02:40.045 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-11 14:02:40.057107 | orchestrator | 14:02:40.056 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=faedf13f-c294-4a6e-9269-a45f68ff7430] 2025-06-11 14:02:40.062130 | orchestrator | 14:02:40.061 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-11 14:02:40.062626 | orchestrator | 14:02:40.062 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
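The security groups and rules created here were spelled out attribute by attribute in the plan, so they can be reconstructed as HCL almost verbatim; only the cross-resource references are filled in. For example, the management group and its SSH rule:

```hcl
# "testbed-management" security group and its SSH rule, reconstructed from
# the plan attributes printed earlier (description "ssh", tcp/22, ingress,
# IPv4, open to 0.0.0.0/0).
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```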
2025-06-11 14:02:40.063263 | orchestrator | 14:02:40.063 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-11 14:02:40.185380 | orchestrator | 14:02:40.184 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a8a46305-133c-4071-983f-13d547be03a6] 2025-06-11 14:02:40.192601 | orchestrator | 14:02:40.192 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-11 14:02:40.201451 | orchestrator | 14:02:40.201 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=4422e358-8262-443b-9108-23bd618b7783] 2025-06-11 14:02:40.216458 | orchestrator | 14:02:40.216 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-11 14:02:40.353427 | orchestrator | 14:02:40.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=2d82ea49-5e54-4df6-bc93-42f82dfab4af] 2025-06-11 14:02:40.360090 | orchestrator | 14:02:40.359 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=402bf6bd-e183-4d8c-bdf5-de4b47ba9a9a] 2025-06-11 14:02:40.376455 | orchestrator | 14:02:40.376 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-11 14:02:40.378176 | orchestrator | 14:02:40.378 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-11 14:02:40.504139 | orchestrator | 14:02:40.503 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=4611d154-2518-4b61-81cf-42a272b6ed4c] 2025-06-11 14:02:40.520625 | orchestrator | 14:02:40.520 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-11 14:02:40.567285 | orchestrator | 14:02:40.566 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6b811bc6-77fc-452c-9649-426885c6483a] 2025-06-11 14:02:40.585469 | orchestrator | 14:02:40.585 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-11 14:02:40.681775 | orchestrator | 14:02:40.681 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=5500422d-47cd-466f-835f-08a6ccabf5f9] 2025-06-11 14:02:40.697663 | orchestrator | 14:02:40.697 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
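Each management port pins a fixed IP and whitelists the shared VIP/VRRP addresses via allowed_address_pairs, as the plan showed for node_port_management[4] (192.168.16.14) and [5] (192.168.16.15). A sketch consistent with that output; the fixed-IP formula is an inference from those two visible addresses, not copied source:

```hcl
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}" # inferred from .14/.15 on [4]/[5]
  }

  # The plan shows the same four allowed_address_pairs on every port.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```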
2025-06-11 14:02:40.731828 | orchestrator | 14:02:40.731 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=0952ea63-bc9b-4320-9a81-7441553e7333] 2025-06-11 14:02:40.840337 | orchestrator | 14:02:40.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=dc6a1018-0a8d-48a7-8465-b961e0979f7d] 2025-06-11 14:02:45.980038 | orchestrator | 14:02:45.979 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=48fbcd2d-fd3e-45ab-8675-cdf8b7b6b27c] 2025-06-11 14:02:46.039158 | orchestrator | 14:02:46.038 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=63011a3b-0b6c-4e0b-837f-da60af87047f] 2025-06-11 14:02:46.159532 | orchestrator | 14:02:46.159 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=5e4cb412-5143-48b5-aaaa-5a969bfbb297] 2025-06-11 14:02:46.246344 | orchestrator | 14:02:46.245 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=ccac584e-4850-4d4f-be37-c3e3caf53f46] 2025-06-11 14:02:46.667541 | orchestrator | 14:02:46.667 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ad9d4087-a651-4b6d-a662-1242f861444c] 2025-06-11 14:02:46.692453 | orchestrator | 14:02:46.692 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=7f47c4f7-62d9-4a3b-a74d-b6c43f3c5427] 2025-06-11 14:02:46.713679 | orchestrator | 14:02:46.713 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=f40b00da-502b-45fb-b825-13239028b3db] 2025-06-11 14:02:46.983491 | orchestrator | 14:02:46.983 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=7eb6ce59-bb62-4615-a0d0-2751e19c5814] 2025-06-11 14:02:47.012352 | orchestrator | 14:02:47.012 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-11 14:02:47.024184 | orchestrator | 14:02:47.024 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-11 14:02:47.025110 | orchestrator | 14:02:47.025 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-11 14:02:47.028397 | orchestrator | 14:02:47.028 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-11 14:02:47.032630 | orchestrator | 14:02:47.032 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-11 14:02:47.034342 | orchestrator | 14:02:47.034 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-11 14:02:47.034489 | orchestrator | 14:02:47.034 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-11 14:02:53.577287 | orchestrator | 14:02:53.576 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=d224023e-3853-4c1a-9a8e-0a722ce2fd02] 2025-06-11 14:02:53.588039 | orchestrator | 14:02:53.587 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-11 14:02:53.599367 | orchestrator | 14:02:53.599 STDOUT terraform: local_file.inventory: Creating... 2025-06-11 14:02:53.600222 | orchestrator | 14:02:53.600 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
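The subnet and router attributes were printed verbatim in the plan above, so their HCL form follows directly; again, only the references between resources are filled in:

```hcl
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from this pool; the statically assigned
  # node and manager IPs (192.168.16.x) stay outside it.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```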
2025-06-11 14:02:53.605558 | orchestrator | 14:02:53.605 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=173946c6398f9c247051033037d6704be2ff1ed9] 2025-06-11 14:02:53.607094 | orchestrator | 14:02:53.606 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=0eb9516a9ea5c0f85a6a8f01d906c736f07e6bdc] 2025-06-11 14:02:54.401750 | orchestrator | 14:02:54.401 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=d224023e-3853-4c1a-9a8e-0a722ce2fd02] 2025-06-11 14:02:57.033306 | orchestrator | 14:02:57.032 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-11 14:02:57.033434 | orchestrator | 14:02:57.033 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-11 14:02:57.033454 | orchestrator | 14:02:57.033 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-11 14:02:57.034366 | orchestrator | 14:02:57.033 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-11 14:02:57.040502 | orchestrator | 14:02:57.040 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-11 14:02:57.041727 | orchestrator | 14:02:57.041 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-11 14:03:07.036108 | orchestrator | 14:03:07.035 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-11 14:03:07.036243 | orchestrator | 14:03:07.035 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-11 14:03:07.036304 | orchestrator | 14:03:07.035 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-11 14:03:07.036324 | orchestrator | 14:03:07.036 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-11 14:03:07.041354 | orchestrator | 14:03:07.041 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-11 14:03:07.042362 | orchestrator | 14:03:07.042 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-11 14:03:07.575735 | orchestrator | 14:03:07.575 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=22f9e9b8-4a63-481d-b8a3-50fe8240feb7] 2025-06-11 14:03:07.661906 | orchestrator | 14:03:07.661 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=d502b7a9-53e8-4052-b404-338f7aeb0522] 2025-06-11 14:03:07.679179 | orchestrator | 14:03:07.678 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=c10ce55b-d6f6-4e30-9a11-7accc1b45789] 2025-06-11 14:03:07.691063 | orchestrator | 14:03:07.690 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=bed82372-62f2-4274-8fd9-61a582f1f62e] 2025-06-11 14:03:17.037825 | orchestrator | 14:03:17.037 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-06-11 14:03:17.043071 | orchestrator | 14:03:17.042 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-06-11 14:03:17.828459 | orchestrator | 14:03:17.828 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=0d587e32-edcd-4a1c-9c28-1b7466ec82a5] 2025-06-11 14:03:17.884612 | orchestrator | 14:03:17.884 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=9c71b9f4-6053-4475-a345-47480412f660] 2025-06-11 14:03:17.914736 | orchestrator | 14:03:17.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-11 14:03:17.914825 | orchestrator | 14:03:17.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-11 14:03:17.921282 | orchestrator | 14:03:17.921 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-11 14:03:17.929214 | orchestrator | 14:03:17.929 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-11 14:03:17.932489 | orchestrator | 14:03:17.932 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1123703099494871343] 2025-06-11 14:03:17.932649 | orchestrator | 14:03:17.932 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-11 14:03:17.938333 | orchestrator | 14:03:17.937 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-11 14:03:17.948073 | orchestrator | 14:03:17.947 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-11 14:03:17.948981 | orchestrator | 14:03:17.948 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-11 14:03:17.955763 | orchestrator | 14:03:17.951 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-11 14:03:17.956080 | orchestrator | 14:03:17.955 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-11 14:03:17.967964 | orchestrator | 14:03:17.967 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
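The attachment IDs that follow (server-UUID/volume-UUID pairs) show a clear pattern: the nine node volumes are attached three apiece to node_server[3], [4], and [5]. A sketch that reproduces that mapping; the actual source may express the distribution differently:

```hcl
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9
  # Volume i goes to server 3 + (i mod 3): volumes 0/3/6 -> node_server[3],
  # 1/4/7 -> node_server[4], 2/5/8 -> node_server[5], matching the IDs below.
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```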
2025-06-11 14:03:23.449187 | orchestrator | 14:03:23.448 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=c10ce55b-d6f6-4e30-9a11-7accc1b45789/98e4ef65-326b-406b-8d68-9bbb471a6ffc] 2025-06-11 14:03:23.481282 | orchestrator | 14:03:23.480 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=bed82372-62f2-4274-8fd9-61a582f1f62e/e952eadf-b7fa-49e6-b121-e808f2d1456b] 2025-06-11 14:03:23.481611 | orchestrator | 14:03:23.481 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=d502b7a9-53e8-4052-b404-338f7aeb0522/0531c1ed-639b-4ab3-bbe7-14f10d387a86] 2025-06-11 14:03:23.506133 | orchestrator | 14:03:23.505 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=d502b7a9-53e8-4052-b404-338f7aeb0522/75267c96-c7d6-45ef-a5a6-94b8e66fe961] 2025-06-11 14:03:23.510976 | orchestrator | 14:03:23.510 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=c10ce55b-d6f6-4e30-9a11-7accc1b45789/1d2dd3c0-811b-40b4-99af-5946e13dbfd3] 2025-06-11 14:03:23.511931 | orchestrator | 14:03:23.511 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=bed82372-62f2-4274-8fd9-61a582f1f62e/5fa61c96-5ca4-4fa7-9393-6e2780ce67d9] 2025-06-11 14:03:23.561061 | orchestrator | 14:03:23.560 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=bed82372-62f2-4274-8fd9-61a582f1f62e/f26631de-4d53-47c9-822c-cbb2033e0b86] 2025-06-11 14:03:23.562266 | orchestrator | 14:03:23.561 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=d502b7a9-53e8-4052-b404-338f7aeb0522/df292424-6e82-4e61-a52c-dd60099c8b3b] 2025-06-11 14:03:23.588158 | orchestrator | 14:03:23.587 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=c10ce55b-d6f6-4e30-9a11-7accc1b45789/997790a1-2284-4ae8-ae59-5b744e390299] 2025-06-11 14:03:27.968460 | orchestrator | 14:03:27.968 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-11 14:03:37.968819 | orchestrator | 14:03:37.968 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-11 14:03:38.393119 | orchestrator | 14:03:38.392 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=7c6337ee-ab0c-47c3-b6e6-30d849955095] 2025-06-11 14:03:38.415808 | orchestrator | 14:03:38.415 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
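Both root outputs were declared sensitive in the plan ("(sensitive value)"), which is why their values are blanked in the output that follows. A hedged sketch of the declarations; the value expressions are assumptions for illustration:

```hcl
# `sensitive = true` matches the plan output and explains the blank values
# printed below. The value expressions are illustrative assumptions.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content # assumed key source
  sensitive = true
}
```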
2025-06-11 14:03:38.415877 | orchestrator | 14:03:38.415 STDOUT terraform: Outputs: 2025-06-11 14:03:38.415886 | orchestrator | 14:03:38.415 STDOUT terraform: manager_address = 2025-06-11 14:03:38.415893 | orchestrator | 14:03:38.415 STDOUT terraform: private_key = 2025-06-11 14:03:38.572708 | orchestrator | ok: Runtime: 0:01:33.108086 2025-06-11 14:03:38.612115 | 2025-06-11 14:03:38.612470 | TASK [Create infrastructure (stable)] 2025-06-11 14:03:39.161752 | orchestrator | skipping: Conditional result was False 2025-06-11 14:03:39.177514 | 2025-06-11 14:03:39.177683 | TASK [Fetch manager address] 2025-06-11 14:03:39.604683 | orchestrator | ok 2025-06-11 14:03:39.611867 | 2025-06-11 14:03:39.612024 | TASK [Set manager_host address] 2025-06-11 14:03:39.690445 | orchestrator | ok 2025-06-11 14:03:39.699786 | 2025-06-11 14:03:39.700059 | LOOP [Update ansible collections] 2025-06-11 14:03:41.276386 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-11 14:03:41.276726 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-11 14:03:41.276777 | orchestrator | Starting galaxy collection install process 2025-06-11 14:03:41.276809 | orchestrator | Process install dependency map 2025-06-11 14:03:41.276836 | orchestrator | Starting collection install process 2025-06-11 14:03:41.276862 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-06-11 14:03:41.276941 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-06-11 14:03:41.276977 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-11 14:03:41.277037 | orchestrator | ok: Item: commons Runtime: 0:00:01.233486 2025-06-11 14:03:42.235845 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-11 14:03:42.235996 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-11 14:03:42.236029 | orchestrator | Starting galaxy collection install process 2025-06-11 14:03:42.236053 | orchestrator | Process install dependency map 2025-06-11 14:03:42.236074 | orchestrator | Starting collection install process 2025-06-11 14:03:42.236094 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-06-11 14:03:42.236115 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-06-11 14:03:42.236134 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-11 14:03:42.236166 | orchestrator | ok: Item: services Runtime: 0:00:00.705661 2025-06-11 14:03:42.253813 | 2025-06-11 14:03:42.254047 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-11 14:03:52.843675 | orchestrator | ok 2025-06-11 14:03:52.856347 | 2025-06-11 14:03:52.856507 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-11 14:04:52.905778 | orchestrator | ok 2025-06-11 14:04:52.916938 | 2025-06-11 14:04:52.917074 | TASK [Fetch manager ssh hostkey] 2025-06-11 14:04:54.491608 | orchestrator | Output suppressed because no_log was given 2025-06-11 14:04:54.499105 | 2025-06-11 14:04:54.499244 | TASK [Get ssh keypair from terraform environment] 2025-06-11 14:04:55.038523 | orchestrator 
| ok: Runtime: 0:00:00.010963 2025-06-11 14:04:55.053878 | 2025-06-11 14:04:55.054127 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-11 14:04:55.106937 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-11 14:04:55.121953 | 2025-06-11 14:04:55.122147 | TASK [Run manager part 0] 2025-06-11 14:04:56.242329 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-11 14:04:56.285858 | orchestrator | 2025-06-11 14:04:56.285905 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-11 14:04:56.285912 | orchestrator | 2025-06-11 14:04:56.285924 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-11 14:04:58.570223 | orchestrator | ok: [testbed-manager] 2025-06-11 14:04:58.570282 | orchestrator | 2025-06-11 14:04:58.570307 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-11 14:04:58.570319 | orchestrator | 2025-06-11 14:04:58.570329 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:05:00.362204 | orchestrator | ok: [testbed-manager] 2025-06-11 14:05:00.362254 | orchestrator | 2025-06-11 14:05:00.362315 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-11 14:05:01.004493 | orchestrator | ok: [testbed-manager] 2025-06-11 14:05:01.004547 | orchestrator | 2025-06-11 14:05:01.004556 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-11 14:05:01.180008 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.180060 | orchestrator | 2025-06-11 14:05:01.180072 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-11 14:05:01.236279 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.236330 | orchestrator | 2025-06-11 14:05:01.236338 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-11 14:05:01.262300 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.262338 | orchestrator | 2025-06-11 14:05:01.262344 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-11 14:05:01.289827 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.289864 | orchestrator | 2025-06-11 14:05:01.289870 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-11 14:05:01.329522 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.329570 | orchestrator | 2025-06-11 14:05:01.329579 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-11 14:05:01.368499 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.368546 | orchestrator | 2025-06-11 14:05:01.368555 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-11 14:05:01.400543 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:05:01.400591 | orchestrator | 2025-06-11 14:05:01.400600 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-11 14:05:02.218233 | orchestrator | changed: 
[testbed-manager] 2025-06-11 14:05:02.218403 | orchestrator | 2025-06-11 14:05:02.218415 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-11 14:08:17.691880 | orchestrator | changed: [testbed-manager] 2025-06-11 14:08:17.691936 | orchestrator | 2025-06-11 14:08:17.691946 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-11 14:09:33.759956 | orchestrator | changed: [testbed-manager] 2025-06-11 14:09:33.760002 | orchestrator | 2025-06-11 14:09:33.760069 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-11 14:09:53.930172 | orchestrator | changed: [testbed-manager] 2025-06-11 14:09:53.930217 | orchestrator | 2025-06-11 14:09:53.930227 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-11 14:10:03.878597 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:03.878684 | orchestrator | 2025-06-11 14:10:03.878701 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-11 14:10:03.931572 | orchestrator | ok: [testbed-manager] 2025-06-11 14:10:03.931665 | orchestrator | 2025-06-11 14:10:03.931682 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-11 14:10:04.776250 | orchestrator | ok: [testbed-manager] 2025-06-11 14:10:04.776301 | orchestrator | 2025-06-11 14:10:04.776313 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-11 14:10:05.554697 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:05.554742 | orchestrator | 2025-06-11 14:10:05.554751 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-11 14:10:12.041430 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:12.041479 | orchestrator | 2025-06-11 14:10:12.041504 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-11 14:10:18.419185 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:18.419228 | orchestrator | 2025-06-11 14:10:18.419238 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-11 14:10:21.006523 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:21.006618 | orchestrator | 2025-06-11 14:10:21.006636 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-11 14:10:22.777460 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:22.777505 | orchestrator | 2025-06-11 14:10:22.777514 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-11 14:10:23.946904 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-11 14:10:23.947126 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-11 14:10:23.947145 | orchestrator | 2025-06-11 14:10:23.947158 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-11 14:10:23.988400 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-11 14:10:23.988464 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-11 14:10:23.988477 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-06-11 14:10:23.988489 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-11 14:10:27.645939 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-11 14:10:27.646089 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-11 14:10:27.646110 | orchestrator | 2025-06-11 14:10:27.646123 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-11 14:10:28.204743 | orchestrator | changed: [testbed-manager] 2025-06-11 14:10:28.204786 | orchestrator | 2025-06-11 14:10:28.204794 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-11 14:15:48.630233 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-11 14:15:48.630347 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-11 14:15:48.630368 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-11 14:15:48.630381 | orchestrator | 2025-06-11 14:15:48.630394 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-11 14:15:50.952225 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-11 14:15:50.952326 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-11 14:15:50.952341 | orchestrator | 2025-06-11 14:15:50.952354 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-11 14:15:50.952365 | orchestrator | 2025-06-11 14:15:50.952377 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:15:52.370803 | orchestrator | ok: [testbed-manager] 2025-06-11 14:15:52.370836 | orchestrator | 2025-06-11 14:15:52.370843 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-11 14:15:52.428162 | orchestrator | ok: [testbed-manager] 2025-06-11 14:15:52.428207 | orchestrator | 2025-06-11 14:15:52.428216 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-11 14:15:52.499264 | orchestrator | ok: [testbed-manager] 2025-06-11 14:15:52.499307 | orchestrator | 2025-06-11 14:15:52.499316 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-11 14:15:53.234083 | orchestrator | changed: [testbed-manager] 2025-06-11 14:15:53.234130 | orchestrator | 2025-06-11 14:15:53.234140 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-11 14:15:53.940019 | orchestrator | changed: [testbed-manager] 2025-06-11 14:15:53.940081 | orchestrator | 2025-06-11 14:15:53.940093 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-11 14:15:55.317170 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-11 14:15:55.317241 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-11 14:15:55.317264 | orchestrator | 2025-06-11 14:15:55.317303 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-11 14:15:56.708087 | orchestrator | changed: [testbed-manager] 2025-06-11 14:15:56.708205 | orchestrator | 2025-06-11 14:15:56.708224 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-06-11 14:15:58.455412 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-11 14:15:58.455492 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-11 14:15:58.455504 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-11 14:15:58.455515 | orchestrator | 2025-06-11 14:15:58.455526 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-11 14:15:59.025079 | orchestrator | changed: [testbed-manager] 2025-06-11 14:15:59.025176 | orchestrator | 2025-06-11 14:15:59.025193 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-11 14:15:59.101019 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:15:59.101102 | orchestrator | 2025-06-11 14:15:59.101116 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-11 14:15:59.978587 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-11 14:15:59.978680 | orchestrator | changed: [testbed-manager] 2025-06-11 14:15:59.978697 | orchestrator | 2025-06-11 14:15:59.978711 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-11 14:16:00.010363 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:00.010435 | orchestrator | 2025-06-11 14:16:00.010449 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-11 14:16:00.044372 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:00.044474 | orchestrator | 2025-06-11 14:16:00.044490 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-11 14:16:00.075659 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:00.075727 | orchestrator | 2025-06-11 14:16:00.075740 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-11 14:16:00.129446 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:00.129543 | orchestrator | 2025-06-11 14:16:00.129560 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-11 14:16:00.830406 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:00.830478 | orchestrator | 2025-06-11 14:16:00.830488 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-11 14:16:00.830496 | orchestrator | 2025-06-11 14:16:00.830504 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:16:02.174873 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:02.174964 | orchestrator | 2025-06-11 14:16:02.175010 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-11 14:16:03.144700 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:03.144796 | orchestrator | 2025-06-11 14:16:03.144813 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:16:03.144827 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-11 14:16:03.144838 | orchestrator | 2025-06-11 14:16:03.565892 | orchestrator | ok: Runtime: 0:11:07.802576 2025-06-11 14:16:03.584186 | 2025-06-11 14:16:03.584331 | TASK [Point out that logging in to the manager is now possible] 2025-06-11 14:16:03.631527 |
orchestrator | ok: It is already possible to log in to the manager with 'make login'. 2025-06-11 14:16:03.641968 | 2025-06-11 14:16:03.642085 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-11 14:16:03.688742 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2025-06-11 14:16:03.698069 | 2025-06-11 14:16:03.698200 | TASK [Run manager part 1 + 2] 2025-06-11 14:16:04.603026 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-11 14:16:04.657365 | orchestrator | 2025-06-11 14:16:04.657413 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-11 14:16:04.657419 | orchestrator | 2025-06-11 14:16:04.657432 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:16:07.539357 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:07.539409 | orchestrator | 2025-06-11 14:16:07.539432 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-11 14:16:07.583022 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:07.583074 | orchestrator | 2025-06-11 14:16:07.583085 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-11 14:16:07.627042 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:07.627094 | orchestrator | 2025-06-11 14:16:07.627104 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-11 14:16:07.667430 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:07.667482 | orchestrator | 2025-06-11 14:16:07.667492 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-11 14:16:07.730055 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:07.730113 | orchestrator | 2025-06-11 14:16:07.730123 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-11 14:16:07.787300 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:07.787352 | orchestrator | 2025-06-11 14:16:07.787362 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-11 14:16:07.830049 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-11 14:16:07.830090 | orchestrator | 2025-06-11 14:16:07.830096 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-11 14:16:08.538818 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:08.538881 | orchestrator | 2025-06-11 14:16:08.538894 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-11 14:16:08.590338 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:08.590391 | orchestrator | 2025-06-11 14:16:08.590400 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-11 14:16:09.949502 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:09.949565 | orchestrator | 2025-06-11 14:16:09.949576 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-11 14:16:10.546062 | orchestrator | ok: [testbed-manager]
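
The osism.commons.repository tasks just above, together with the 'Copy ubuntu.sources file' task that follows, replace the manager's legacy sources.list with a deb822 ubuntu.sources file plus a 99osism APT configuration. The result can be checked by hand; a minimal sketch, assuming interactive SSH access to testbed-manager and using only file names that appear in the task names of this log:

    # Inspect the APT sources written by osism.commons.repository
    ssh testbed-manager 'ls -l /etc/apt/sources.list.d/ && cat /etc/apt/sources.list.d/ubuntu.sources'
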
2025-06-11 14:16:10.546148 | orchestrator | 2025-06-11 14:16:10.546158 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-11 14:16:11.715600 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:11.715652 | orchestrator | 2025-06-11 14:16:11.715661 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-11 14:16:24.533605 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:24.533658 | orchestrator | 2025-06-11 14:16:24.533666 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-11 14:16:25.215131 | orchestrator | ok: [testbed-manager] 2025-06-11 14:16:25.215222 | orchestrator | 2025-06-11 14:16:25.215241 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-11 14:16:25.270432 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:25.270507 | orchestrator | 2025-06-11 14:16:25.270521 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-11 14:16:26.256730 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:26.256772 | orchestrator | 2025-06-11 14:16:26.256778 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-11 14:16:27.235070 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:27.235169 | orchestrator | 2025-06-11 14:16:27.235185 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-11 14:16:27.811707 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:27.811804 | orchestrator | 2025-06-11 14:16:27.811821 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-11 14:16:27.852791 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-11 14:16:27.852908 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-11 14:16:27.852924 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-11 14:16:27.852935 | orchestrator | deprecation_warnings=False in ansible.cfg. 
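
The DEPRECATION WARNING above (also seen earlier during 'Sync sources in /opt/src') is emitted by the control node's Ansible itself and is harmless for the job. As the message notes, it can be silenced via ansible.cfg; a sketch, written here as a standalone file for brevity, while in practice the setting would be merged into the existing [defaults] section:

    # Write an ansible.cfg that silences Ansible deprecation warnings
    cat > ansible.cfg <<'EOF'
    [defaults]
    deprecation_warnings = False
    EOF
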
2025-06-11 14:16:29.882865 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:29.882924 | orchestrator | 2025-06-11 14:16:29.882931 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-11 14:16:38.959378 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-11 14:16:38.959538 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-11 14:16:38.959555 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-11 14:16:38.959565 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-11 14:16:38.959584 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-11 14:16:38.959594 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-11 14:16:38.959604 | orchestrator | 2025-06-11 14:16:38.959615 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-11 14:16:40.036515 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:40.036611 | orchestrator | 2025-06-11 14:16:40.036628 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-11 14:16:40.082887 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:40.082974 | orchestrator | 2025-06-11 14:16:40.083019 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-11 14:16:43.264959 | orchestrator | changed: [testbed-manager] 2025-06-11 14:16:43.265075 | orchestrator | 2025-06-11 14:16:43.265104 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-11 14:16:43.313098 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:16:43.313192 | orchestrator | 2025-06-11 14:16:43.313210 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-11 14:18:21.885135 | orchestrator | changed: [testbed-manager] 2025-06-11 14:18:21.885241 | orchestrator | 2025-06-11 14:18:21.885261 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-11 14:18:23.035268 | orchestrator | ok: [testbed-manager] 2025-06-11 14:18:23.035358 | orchestrator | 2025-06-11 14:18:23.035376 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:18:23.035390 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-11 14:18:23.035401 | orchestrator | 2025-06-11 14:18:23.333540 | orchestrator | ok: Runtime: 0:02:19.123356 2025-06-11 14:18:23.349684 | 2025-06-11 14:18:23.349818 | TASK [Reboot manager] 2025-06-11 14:18:24.892993 | orchestrator | ok: Runtime: 0:00:00.966937 2025-06-11 14:18:24.914027 | 2025-06-11 14:18:24.914211 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-11 14:18:41.369916 | orchestrator | ok 2025-06-11 14:18:41.379886 | 2025-06-11 14:18:41.380057 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-11 14:19:41.419751 | orchestrator | ok 2025-06-11 14:19:41.427121 | 2025-06-11 14:19:41.427229 | TASK [Deploy manager + bootstrap nodes] 2025-06-11 14:19:43.996649 | orchestrator | 2025-06-11 14:19:43.997032 | orchestrator | # DEPLOY MANAGER 2025-06-11 14:19:43.997068 | orchestrator | 2025-06-11 14:19:43.997083 | orchestrator | + set -e 2025-06-11 14:19:43.997097 | orchestrator | + echo 2025-06-11 14:19:43.997110 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-11 14:19:43.997128 | orchestrator | + echo 2025-06-11 14:19:43.997182 | orchestrator | + cat /opt/manager-vars.sh 2025-06-11 14:19:44.000260 | orchestrator | export NUMBER_OF_NODES=6 2025-06-11 14:19:44.000302 | orchestrator | 2025-06-11 14:19:44.000316 | orchestrator | export CEPH_VERSION=reef 2025-06-11 14:19:44.000329 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-11 14:19:44.000343 | orchestrator | export MANAGER_VERSION=latest 2025-06-11 14:19:44.000377 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-11 14:19:44.000389 | orchestrator | 2025-06-11 14:19:44.000408 | orchestrator | export ARA=false 2025-06-11 14:19:44.000419 | orchestrator | export DEPLOY_MODE=manager 2025-06-11 14:19:44.000437 | orchestrator | export TEMPEST=false 2025-06-11 14:19:44.000449 | orchestrator | export IS_ZUUL=true 2025-06-11 14:19:44.000460 | orchestrator | 2025-06-11 14:19:44.000478 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:19:44.000490 | orchestrator | export EXTERNAL_API=false 2025-06-11 14:19:44.000501 | orchestrator | 2025-06-11 14:19:44.000512 | orchestrator | export IMAGE_USER=ubuntu 2025-06-11 14:19:44.000525 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-11 14:19:44.000536 | orchestrator | 2025-06-11 14:19:44.000547 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-11 14:19:44.000567 | orchestrator | 2025-06-11 14:19:44.000578 | orchestrator | + echo 2025-06-11 14:19:44.000594 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 14:19:44.001470 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 14:19:44.001490 | orchestrator | ++ INTERACTIVE=false 2025-06-11 14:19:44.001503 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 14:19:44.001516 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 14:19:44.001711 | orchestrator | + source /opt/manager-vars.sh 2025-06-11 14:19:44.001729 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-11 14:19:44.001743 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-11 14:19:44.001755 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-11 14:19:44.001767 | orchestrator | ++ CEPH_VERSION=reef 2025-06-11 14:19:44.001778 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-11 14:19:44.001790 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-11 14:19:44.001802 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 14:19:44.001814 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 14:19:44.001826 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-11 14:19:44.001847 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-11 14:19:44.001859 | orchestrator | ++ export ARA=false 2025-06-11 14:19:44.001871 | orchestrator | ++ ARA=false 2025-06-11 14:19:44.001883 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-11 14:19:44.001895 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-11 14:19:44.001913 | orchestrator | ++ export TEMPEST=false 2025-06-11 14:19:44.001924 | orchestrator | ++ TEMPEST=false 2025-06-11 14:19:44.001935 | orchestrator | ++ export IS_ZUUL=true 2025-06-11 14:19:44.001945 | orchestrator | ++ IS_ZUUL=true 2025-06-11 14:19:44.001982 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:19:44.001993 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:19:44.002004 | orchestrator | ++ export EXTERNAL_API=false 2025-06-11 14:19:44.002015 | orchestrator | ++ EXTERNAL_API=false 2025-06-11 14:19:44.002122 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-11 
14:19:44.002133 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-11 14:19:44.002150 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-11 14:19:44.002161 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-11 14:19:44.002172 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-11 14:19:44.002183 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-11 14:19:44.002194 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-11 14:19:44.059854 | orchestrator | + docker version 2025-06-11 14:19:44.336567 | orchestrator | Client: Docker Engine - Community 2025-06-11 14:19:44.336676 | orchestrator | Version: 27.5.1 2025-06-11 14:19:44.336692 | orchestrator | API version: 1.47 2025-06-11 14:19:44.336703 | orchestrator | Go version: go1.22.11 2025-06-11 14:19:44.336714 | orchestrator | Git commit: 9f9e405 2025-06-11 14:19:44.336725 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-11 14:19:44.336755 | orchestrator | OS/Arch: linux/amd64 2025-06-11 14:19:44.336777 | orchestrator | Context: default 2025-06-11 14:19:44.336788 | orchestrator | 2025-06-11 14:19:44.336799 | orchestrator | Server: Docker Engine - Community 2025-06-11 14:19:44.336810 | orchestrator | Engine: 2025-06-11 14:19:44.336822 | orchestrator | Version: 27.5.1 2025-06-11 14:19:44.336833 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-11 14:19:44.336887 | orchestrator | Go version: go1.22.11 2025-06-11 14:19:44.336899 | orchestrator | Git commit: 4c9b3b0 2025-06-11 14:19:44.336910 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-11 14:19:44.336920 | orchestrator | OS/Arch: linux/amd64 2025-06-11 14:19:44.336931 | orchestrator | Experimental: false 2025-06-11 14:19:44.336942 | orchestrator | containerd: 2025-06-11 14:19:44.336975 | orchestrator | Version: 1.7.27 2025-06-11 14:19:44.336987 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-11 14:19:44.336998 | orchestrator | runc: 2025-06-11 14:19:44.337060 | orchestrator | Version: 1.2.5 2025-06-11 14:19:44.337074 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-11 14:19:44.337085 | orchestrator | docker-init: 2025-06-11 14:19:44.337096 | orchestrator | Version: 0.19.0 2025-06-11 14:19:44.337108 | orchestrator | GitCommit: de40ad0 2025-06-11 14:19:44.341070 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-11 14:19:44.351418 | orchestrator | + set -e 2025-06-11 14:19:44.351482 | orchestrator | + source /opt/manager-vars.sh 2025-06-11 14:19:44.351494 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-11 14:19:44.351505 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-11 14:19:44.351516 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-11 14:19:44.351526 | orchestrator | ++ CEPH_VERSION=reef 2025-06-11 14:19:44.351537 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-11 14:19:44.351549 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-11 14:19:44.351560 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 14:19:44.351571 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 14:19:44.351581 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-11 14:19:44.351592 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-11 14:19:44.351603 | orchestrator | ++ export ARA=false 2025-06-11 14:19:44.351614 | orchestrator | ++ ARA=false 2025-06-11 14:19:44.351624 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-11 14:19:44.351635 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-11 14:19:44.351646 | orchestrator | ++ 
export TEMPEST=false 2025-06-11 14:19:44.351657 | orchestrator | ++ TEMPEST=false 2025-06-11 14:19:44.351667 | orchestrator | ++ export IS_ZUUL=true 2025-06-11 14:19:44.351678 | orchestrator | ++ IS_ZUUL=true 2025-06-11 14:19:44.351695 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:19:44.351707 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:19:44.351717 | orchestrator | ++ export EXTERNAL_API=false 2025-06-11 14:19:44.351728 | orchestrator | ++ EXTERNAL_API=false 2025-06-11 14:19:44.351739 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-11 14:19:44.351749 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-11 14:19:44.351769 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-11 14:19:44.351780 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-11 14:19:44.351791 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-11 14:19:44.351802 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-11 14:19:44.351813 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 14:19:44.351823 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 14:19:44.351834 | orchestrator | ++ INTERACTIVE=false 2025-06-11 14:19:44.351844 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 14:19:44.351859 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 14:19:44.351870 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-11 14:19:44.351881 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 14:19:44.351892 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-11 14:19:44.360302 | orchestrator | + set -e 2025-06-11 14:19:44.360371 | orchestrator | + VERSION=reef 2025-06-11 14:19:44.361593 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-11 14:19:44.367444 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-11 14:19:44.367503 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-11 14:19:44.372886 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-11 14:19:44.379828 | orchestrator | + set -e 2025-06-11 14:19:44.380455 | orchestrator | + VERSION=2024.2 2025-06-11 14:19:44.381200 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-11 14:19:44.385182 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-11 14:19:44.385230 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-11 14:19:44.389701 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-11 14:19:44.390654 | orchestrator | ++ semver latest 7.0.0 2025-06-11 14:19:44.451038 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-11 14:19:44.451124 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 14:19:44.451139 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-11 14:19:44.451150 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-11 14:19:44.546787 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-11 14:19:44.549199 | orchestrator | + source /opt/venv/bin/activate 2025-06-11 14:19:44.550440 | orchestrator | ++ deactivate nondestructive 2025-06-11 14:19:44.550465 | orchestrator | ++ '[' -n '' ']' 2025-06-11 14:19:44.550472 | orchestrator | ++ '[' -n '' ']' 2025-06-11 14:19:44.550477 | orchestrator | ++ hash -r 2025-06-11 14:19:44.550637 | orchestrator | 
++ '[' -n '' ']' 2025-06-11 14:19:44.550650 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-11 14:19:44.550658 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-11 14:19:44.550665 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-11 14:19:44.550680 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-11 14:19:44.550692 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-11 14:19:44.550699 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-11 14:19:44.550706 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-11 14:19:44.550715 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-11 14:19:44.550726 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-11 14:19:44.550733 | orchestrator | ++ export PATH 2025-06-11 14:19:44.550941 | orchestrator | ++ '[' -n '' ']' 2025-06-11 14:19:44.550990 | orchestrator | ++ '[' -z '' ']' 2025-06-11 14:19:44.550998 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-11 14:19:44.551005 | orchestrator | ++ PS1='(venv) ' 2025-06-11 14:19:44.551012 | orchestrator | ++ export PS1 2025-06-11 14:19:44.551019 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-11 14:19:44.551028 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-11 14:19:44.551035 | orchestrator | ++ hash -r 2025-06-11 14:19:44.551176 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-11 14:19:45.679190 | orchestrator | 2025-06-11 14:19:45.679305 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-11 14:19:45.679321 | orchestrator | 2025-06-11 14:19:45.679332 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-11 14:19:46.249568 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:46.249702 | orchestrator | 2025-06-11 14:19:46.249729 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-11 14:19:47.270980 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:47.271095 | orchestrator | 2025-06-11 14:19:47.271112 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-11 14:19:47.271125 | orchestrator | 2025-06-11 14:19:47.271137 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:19:49.811891 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:49.812058 | orchestrator | 2025-06-11 14:19:49.812076 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-11 14:19:49.871081 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:49.871183 | orchestrator | 2025-06-11 14:19:49.871205 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-11 14:19:50.355546 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:50.355646 | orchestrator | 2025-06-11 14:19:50.355659 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-11 14:19:50.399667 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:19:50.399770 | orchestrator | 2025-06-11 14:19:50.399785 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-06-11 14:19:50.752556 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:50.752671 | orchestrator | 2025-06-11 14:19:50.752689 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-06-11 14:19:50.816089 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:19:50.816176 | orchestrator | 2025-06-11 14:19:50.816186 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-11 14:19:51.199368 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:51.199473 | orchestrator | 2025-06-11 14:19:51.199489 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-11 14:19:51.311837 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:19:51.311989 | orchestrator | 2025-06-11 14:19:51.312007 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-11 14:19:51.312020 | orchestrator | 2025-06-11 14:19:51.312034 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:19:53.243388 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:53.243502 | orchestrator | 2025-06-11 14:19:53.243518 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-11 14:19:53.339921 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-11 14:19:53.340066 | orchestrator | 2025-06-11 14:19:53.340084 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-11 14:19:53.397477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-11 14:19:53.397574 | orchestrator | 2025-06-11 14:19:53.397588 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-11 14:19:54.595538 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-11 14:19:54.595662 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-11 14:19:54.595688 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-11 14:19:54.595710 | orchestrator | 2025-06-11 14:19:54.595725 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-11 14:19:56.516021 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-11 14:19:56.516137 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-11 14:19:56.516156 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-11 14:19:56.516169 | orchestrator | 2025-06-11 14:19:56.516181 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-11 14:19:57.167299 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-11 14:19:57.167388 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:57.167404 | orchestrator | 2025-06-11 14:19:57.167416 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-11 14:19:57.758632 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-11 14:19:57.758725 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:57.758741 | orchestrator | 2025-06-11 14:19:57.758753 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-06-11 14:19:57.815562 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:19:57.815641 | orchestrator | 2025-06-11 14:19:57.815655 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-11 14:19:58.161337 | orchestrator | ok: [testbed-manager] 2025-06-11 14:19:58.161421 | orchestrator | 2025-06-11 14:19:58.161434 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-11 14:19:58.226673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-11 14:19:58.226748 | orchestrator | 2025-06-11 14:19:58.226761 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-11 14:19:59.248650 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:59.248745 | orchestrator | 2025-06-11 14:19:59.248761 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-11 14:19:59.997464 | orchestrator | changed: [testbed-manager] 2025-06-11 14:19:59.997552 | orchestrator | 2025-06-11 14:19:59.997567 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-11 14:20:10.477902 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:10.478095 | orchestrator | 2025-06-11 14:20:10.478115 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-11 14:20:10.524918 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:20:10.525010 | orchestrator | 2025-06-11 14:20:10.525027 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-11 14:20:10.525040 | orchestrator | 2025-06-11 14:20:10.525052 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:20:12.206548 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:12.206655 | orchestrator | 2025-06-11 14:20:12.206703 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-11 14:20:12.317592 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-11 14:20:12.317705 | orchestrator | 2025-06-11 14:20:12.317719 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-11 14:20:12.376498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-11 14:20:12.376607 | orchestrator | 2025-06-11 14:20:12.376622 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-11 14:20:14.800716 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:14.800832 | orchestrator | 2025-06-11 14:20:14.800850 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-11 14:20:14.856491 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:14.856611 | orchestrator | 2025-06-11 14:20:14.856630 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-11 14:20:14.973564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-11 14:20:14.973678 | orchestrator | 2025-06-11 14:20:14.973694 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-11 14:20:17.756508 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-11 14:20:17.756639 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-11 14:20:17.756656 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-11 14:20:17.756682 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-11 14:20:17.756734 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-11 14:20:17.756747 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-11 14:20:17.756759 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-11 14:20:17.756770 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-11 14:20:17.756782 | orchestrator | 2025-06-11 14:20:17.756794 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-11 14:20:18.358126 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:18.358230 | orchestrator | 2025-06-11 14:20:18.358245 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-11 14:20:18.974390 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:18.974493 | orchestrator | 2025-06-11 14:20:18.974508 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-11 14:20:19.045374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-11 14:20:19.045484 | orchestrator | 2025-06-11 14:20:19.045500 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-11 14:20:20.244148 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-11 14:20:20.244252 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-11 14:20:20.244267 | orchestrator | 2025-06-11 14:20:20.244280 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-11 14:20:20.843258 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:20.843363 | orchestrator | 2025-06-11 14:20:20.843378 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-11 14:20:20.903546 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:20:20.903643 | orchestrator | 2025-06-11 14:20:20.903657 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-11 14:20:20.960506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-11 14:20:20.960613 | orchestrator | 2025-06-11 14:20:20.960628 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-11 14:20:22.321828 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-11 14:20:22.321921 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-11 14:20:22.321933 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:22.321994 | orchestrator | 2025-06-11 14:20:22.322004 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-11 14:20:22.939436 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:22.939559 
| orchestrator | 2025-06-11 14:20:22.939576 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-11 14:20:22.992487 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:20:22.992588 | orchestrator | 2025-06-11 14:20:22.992604 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-11 14:20:23.092090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-11 14:20:23.092187 | orchestrator | 2025-06-11 14:20:23.092202 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-11 14:20:23.637742 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:23.637815 | orchestrator | 2025-06-11 14:20:23.637822 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-11 14:20:24.052033 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:24.052159 | orchestrator | 2025-06-11 14:20:24.052175 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-11 14:20:25.287224 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-11 14:20:25.287331 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-11 14:20:25.287347 | orchestrator | 2025-06-11 14:20:25.287360 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-11 14:20:25.922200 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:25.922306 | orchestrator | 2025-06-11 14:20:25.922322 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-11 14:20:26.317562 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:26.317731 | orchestrator | 2025-06-11 14:20:26.317759 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-11 14:20:26.673606 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:26.673711 | orchestrator | 2025-06-11 14:20:26.673727 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-11 14:20:26.713577 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:20:26.713670 | orchestrator | 2025-06-11 14:20:26.713683 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-11 14:20:26.785173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-11 14:20:26.785272 | orchestrator | 2025-06-11 14:20:26.785287 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-11 14:20:26.825261 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:26.825347 | orchestrator | 2025-06-11 14:20:26.825361 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-11 14:20:28.802286 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-11 14:20:28.802361 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-11 14:20:28.802368 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-11 14:20:28.802373 | orchestrator | 2025-06-11 14:20:28.802379 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-06-11 14:20:29.498200 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:29.498292 | orchestrator | 2025-06-11 14:20:29.498306 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-11 14:20:30.198884 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:30.199020 | orchestrator | 2025-06-11 14:20:30.199037 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-11 14:20:30.867481 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:30.867586 | orchestrator | 2025-06-11 14:20:30.867605 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-11 14:20:30.943354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-11 14:20:30.943442 | orchestrator | 2025-06-11 14:20:30.943456 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-11 14:20:30.987034 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:30.987119 | orchestrator | 2025-06-11 14:20:30.987132 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-11 14:20:31.695547 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-11 14:20:31.695652 | orchestrator | 2025-06-11 14:20:31.695668 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-11 14:20:31.778425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-11 14:20:31.778524 | orchestrator | 2025-06-11 14:20:31.778540 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-11 14:20:32.490201 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:32.490276 | orchestrator | 2025-06-11 14:20:32.490282 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-11 14:20:33.081447 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:33.081573 | orchestrator | 2025-06-11 14:20:33.081590 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-11 14:20:33.138179 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:20:33.138244 | orchestrator | 2025-06-11 14:20:33.138257 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-11 14:20:33.190438 | orchestrator | ok: [testbed-manager] 2025-06-11 14:20:33.190498 | orchestrator | 2025-06-11 14:20:33.190510 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-11 14:20:33.954244 | orchestrator | changed: [testbed-manager] 2025-06-11 14:20:33.954335 | orchestrator | 2025-06-11 14:20:33.954351 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-11 14:21:41.482600 | orchestrator | changed: [testbed-manager] 2025-06-11 14:21:41.482718 | orchestrator | 2025-06-11 14:21:41.482735 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-11 14:21:42.411862 | orchestrator | ok: [testbed-manager] 2025-06-11 14:21:42.412025 | orchestrator | 2025-06-11 14:21:42.412045 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-06-11 14:21:42.457624 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:21:42.457719 | orchestrator | 2025-06-11 14:21:42.457731 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-06-11 14:21:44.888326 | orchestrator | changed: [testbed-manager] 2025-06-11 14:21:44.888432 | orchestrator | 2025-06-11 14:21:44.888446 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-11 14:21:44.942493 | orchestrator | ok: [testbed-manager] 2025-06-11 14:21:44.942596 | orchestrator | 2025-06-11 14:21:44.942614 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-11 14:21:44.942627 | orchestrator | 2025-06-11 14:21:44.942639 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-11 14:21:44.986244 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:21:44.986360 | orchestrator | 2025-06-11 14:21:44.986375 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-11 14:22:45.035664 | orchestrator | Pausing for 60 seconds 2025-06-11 14:22:45.035792 | orchestrator | changed: [testbed-manager] 2025-06-11 14:22:45.035809 | orchestrator | 2025-06-11 14:22:45.035822 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-11 14:22:49.121375 | orchestrator | changed: [testbed-manager] 2025-06-11 14:22:49.121482 | orchestrator | 2025-06-11 14:22:49.121499 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-06-11 14:23:30.826663 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-06-11 14:23:30.826810 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left).
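
The retrying handler above polls until the freshly started manager stack reports healthy; the two FAILED - RETRYING lines just mean the containers needed a few extra polls after startup. The same state can be inspected interactively with the commands the deploy script itself runs further down in this log, so they should work unchanged on the manager:

    # List the manager stack with its health column
    docker compose --project-directory /opt/manager ps
    # Query a single container's health status directly (any name from the listing)
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
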
2025-06-11 14:23:30.826824 | orchestrator | changed: [testbed-manager] 2025-06-11 14:23:30.826838 | orchestrator | 2025-06-11 14:23:30.826850 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-11 14:23:39.813701 | orchestrator | changed: [testbed-manager] 2025-06-11 14:23:39.813811 | orchestrator | 2025-06-11 14:23:39.813819 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-11 14:23:39.907090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-11 14:23:39.907235 | orchestrator | 2025-06-11 14:23:39.907250 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-11 14:23:39.907262 | orchestrator | 2025-06-11 14:23:39.907273 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-11 14:23:39.987295 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:23:39.987410 | orchestrator | 2025-06-11 14:23:39.987423 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:23:39.987438 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-11 14:23:39.987449 | orchestrator | 2025-06-11 14:23:40.150762 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-11 14:23:40.150916 | orchestrator | + deactivate 2025-06-11 14:23:40.150933 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-11 14:23:40.150948 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-11 14:23:40.150959 | orchestrator | + export PATH 2025-06-11 14:23:40.150971 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-11 14:23:40.150982 | orchestrator | + '[' -n '' ']' 2025-06-11 14:23:40.150993 | orchestrator | + hash -r 2025-06-11 14:23:40.151004 | orchestrator | + '[' -n '' ']' 2025-06-11 14:23:40.151015 | orchestrator | + unset VIRTUAL_ENV 2025-06-11 14:23:40.151025 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-11 14:23:40.151036 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-06-11 14:23:40.151047 | orchestrator | + unset -f deactivate 2025-06-11 14:23:40.151058 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-11 14:23:40.157328 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-11 14:23:40.157366 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-11 14:23:40.157384 | orchestrator | + local max_attempts=60 2025-06-11 14:23:40.157402 | orchestrator | + local name=ceph-ansible 2025-06-11 14:23:40.157420 | orchestrator | + local attempt_num=1 2025-06-11 14:23:40.158662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:23:40.192369 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:23:40.192433 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-11 14:23:40.192442 | orchestrator | + local max_attempts=60 2025-06-11 14:23:40.192450 | orchestrator | + local name=kolla-ansible 2025-06-11 14:23:40.192460 | orchestrator | + local attempt_num=1 2025-06-11 14:23:40.193115 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-11 14:23:40.230096 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:23:40.230175 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-11 14:23:40.230183 | orchestrator | + local max_attempts=60 2025-06-11 14:23:40.230190 | orchestrator | + local name=osism-ansible 2025-06-11 14:23:40.230195 | orchestrator | + local attempt_num=1 2025-06-11 14:23:40.231334 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-11 14:23:40.272315 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:23:40.272402 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-11 14:23:40.272410 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-11 14:23:41.012006 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-11 14:23:41.231694 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-11 14:23:41.231812 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231824 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231833 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-11 14:23:41.231844 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-11 14:23:41.231941 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231962 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231970 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-11 14:23:41.231977 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231984 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-11 14:23:41.231991 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.231998 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-11 14:23:41.232004 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.232010 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.232017 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-11 14:23:41.247151 | orchestrator | ++ semver latest 7.0.0 2025-06-11 14:23:41.305957 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-11 14:23:41.306126 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 14:23:41.306141 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-11 14:23:41.311650 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-11 14:23:43.020839 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:23:43.021028 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:23:43.021042 | orchestrator | Registering Redlock._release_script 2025-06-11 14:23:43.214564 | orchestrator | 2025-06-11 14:23:43 | INFO  | Task 801b30aa-2c71-4e2a-9fc6-364d62fb516e (resolvconf) was prepared for execution. 2025-06-11 14:23:43.214689 | orchestrator | 2025-06-11 14:23:43 | INFO  | It takes a moment until task 801b30aa-2c71-4e2a-9fc6-364d62fb516e (resolvconf) has been started and output is visible here. 
2025-06-11 14:23:57.029041 | orchestrator |
2025-06-11 14:23:57.029176 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-11 14:23:57.029194 | orchestrator |
2025-06-11 14:23:57.029207 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-11 14:23:57.029221 | orchestrator | Wednesday 11 June 2025 14:23:47 +0000 (0:00:00.162) 0:00:00.162 ********
2025-06-11 14:23:57.029233 | orchestrator | ok: [testbed-manager]
2025-06-11 14:23:57.029245 | orchestrator |
2025-06-11 14:23:57.029257 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-11 14:23:57.029272 | orchestrator | Wednesday 11 June 2025 14:23:51 +0000 (0:00:03.853) 0:00:04.016 ********
2025-06-11 14:23:57.029284 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:23:57.029321 | orchestrator |
2025-06-11 14:23:57.029333 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-11 14:23:57.029343 | orchestrator | Wednesday 11 June 2025 14:23:51 +0000 (0:00:00.059) 0:00:04.075 ********
2025-06-11 14:23:57.029355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-11 14:23:57.029367 | orchestrator |
2025-06-11 14:23:57.029378 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-11 14:23:57.029389 | orchestrator | Wednesday 11 June 2025 14:23:51 +0000 (0:00:00.068) 0:00:04.144 ********
2025-06-11 14:23:57.029400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-11 14:23:57.029410 | orchestrator |
2025-06-11 14:23:57.029422 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-11 14:23:57.029433 | orchestrator | Wednesday 11 June 2025 14:23:51 +0000 (0:00:00.071) 0:00:04.215 ********
2025-06-11 14:23:57.029443 | orchestrator | ok: [testbed-manager]
2025-06-11 14:23:57.029454 | orchestrator |
2025-06-11 14:23:57.029465 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-11 14:23:57.029476 | orchestrator | Wednesday 11 June 2025 14:23:52 +0000 (0:00:01.159) 0:00:05.375 ********
2025-06-11 14:23:57.029488 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:23:57.029501 | orchestrator |
2025-06-11 14:23:57.029513 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-11 14:23:57.029526 | orchestrator | Wednesday 11 June 2025 14:23:52 +0000 (0:00:00.498) 0:00:05.439 ********
2025-06-11 14:23:57.029538 | orchestrator | ok: [testbed-manager]
2025-06-11 14:23:57.029550 | orchestrator |
2025-06-11 14:23:57.029563 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-11 14:23:57.029575 | orchestrator | Wednesday 11 June 2025 14:23:52 +0000 (0:00:00.088) 0:00:05.938 ********
2025-06-11 14:23:57.029588 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:23:57.029600 | orchestrator |
2025-06-11 14:23:57.029613 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-11 14:23:57.029626 | orchestrator | Wednesday 11 June 2025 14:23:53 +0000 (0:00:00.088) 0:00:06.027 ********
2025-06-11 14:23:57.029638 | orchestrator | changed: [testbed-manager]
2025-06-11 14:23:57.029650 | orchestrator |
2025-06-11 14:23:57.029663 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-11 14:23:57.029675 | orchestrator | Wednesday 11 June 2025 14:23:53 +0000 (0:00:00.528) 0:00:06.556 ********
2025-06-11 14:23:57.029687 | orchestrator | changed: [testbed-manager]
2025-06-11 14:23:57.029699 | orchestrator |
2025-06-11 14:23:57.029712 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-11 14:23:57.029724 | orchestrator | Wednesday 11 June 2025 14:23:54 +0000 (0:00:01.074) 0:00:07.630 ********
2025-06-11 14:23:57.029736 | orchestrator | ok: [testbed-manager]
2025-06-11 14:23:57.029749 | orchestrator |
2025-06-11 14:23:57.029761 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-11 14:23:57.029774 | orchestrator | Wednesday 11 June 2025 14:23:55 +0000 (0:00:00.925) 0:00:08.555 ********
2025-06-11 14:23:57.029787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-11 14:23:57.029799 | orchestrator |
2025-06-11 14:23:57.029811 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-11 14:23:57.029833 | orchestrator | Wednesday 11 June 2025 14:23:55 +0000 (0:00:00.082) 0:00:08.638 ********
2025-06-11 14:23:57.029846 | orchestrator | changed: [testbed-manager]
2025-06-11 14:23:57.029886 | orchestrator |
2025-06-11 14:23:57.029898 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:23:57.029910 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-11 14:23:57.029930 | orchestrator |
2025-06-11 14:23:57.029941 | orchestrator |
2025-06-11 14:23:57.029952 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:23:57.029963 | orchestrator | Wednesday 11 June 2025 14:23:56 +0000 (0:00:01.111) 0:00:09.749 ********
2025-06-11 14:23:57.029974 | orchestrator | ===============================================================================
2025-06-11 14:23:57.029985 | orchestrator | Gathering Facts --------------------------------------------------------- 3.85s
2025-06-11 14:23:57.029996 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.16s
2025-06-11 14:23:57.030006 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s
2025-06-11 14:23:57.030089 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s
2025-06-11 14:23:57.030101 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s
2025-06-11 14:23:57.030112 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2025-06-11 14:23:57.030146 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s
2025-06-11 14:23:57.030158 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-06-11 14:23:57.030169 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-06-11 14:23:57.030179 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-06-11 14:23:57.030190 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-06-11 14:23:57.030201 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-06-11 14:23:57.030212 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-06-11 14:23:57.247537 | orchestrator | + osism apply sshconfig
2025-06-11 14:23:58.855562 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:23:58.855670 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:23:58.855683 | orchestrator | Registering Redlock._release_script
2025-06-11 14:23:58.910397 | orchestrator | 2025-06-11 14:23:58 | INFO  | Task c610dca5-8f44-4cf2-9e8b-513bc35e33da (sshconfig) was prepared for execution.
2025-06-11 14:23:58.910470 | orchestrator | 2025-06-11 14:23:58 | INFO  | It takes a moment until task c610dca5-8f44-4cf2-9e8b-513bc35e33da (sshconfig) has been started and output is visible here.
2025-06-11 14:24:10.371257 | orchestrator |
2025-06-11 14:24:10.371359 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-11 14:24:10.371371 | orchestrator |
2025-06-11 14:24:10.371378 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-11 14:24:10.371386 | orchestrator | Wednesday 11 June 2025 14:24:02 +0000 (0:00:00.163) 0:00:00.163 ********
2025-06-11 14:24:10.371392 | orchestrator | ok: [testbed-manager]
2025-06-11 14:24:10.371400 | orchestrator |
2025-06-11 14:24:10.371407 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-11 14:24:10.371414 | orchestrator | Wednesday 11 June 2025 14:24:03 +0000 (0:00:00.551) 0:00:00.714 ********
2025-06-11 14:24:10.371421 | orchestrator | changed: [testbed-manager]
2025-06-11 14:24:10.371429 | orchestrator |
2025-06-11 14:24:10.371436 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-11 14:24:10.371510 | orchestrator | Wednesday 11 June 2025 14:24:03 +0000 (0:00:00.493) 0:00:01.207 ********
2025-06-11 14:24:10.371518 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-11 14:24:10.371525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-11 14:24:10.371532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-11 14:24:10.371538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-11 14:24:10.371545 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-11 14:24:10.371551 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-11 14:24:10.371595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-11 14:24:10.371603 | orchestrator |
2025-06-11 14:24:10.371609 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-11 14:24:10.371616 | orchestrator | Wednesday 11 June 2025 14:24:09 +0000 (0:00:05.615) 0:00:06.823 ********
2025-06-11 14:24:10.371622 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:24:10.371628 | orchestrator |
2025-06-11 14:24:10.371634 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-11 14:24:10.371641 | orchestrator | Wednesday 11 June 2025 14:24:09 +0000 (0:00:00.071) 0:00:06.894 ********
2025-06-11 14:24:10.371648 | orchestrator | changed: [testbed-manager]
2025-06-11 14:24:10.371654 | orchestrator |
2025-06-11 14:24:10.371661 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:24:10.371669 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-11 14:24:10.371676 | orchestrator |
2025-06-11 14:24:10.371683 | orchestrator |
2025-06-11 14:24:10.371688 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:24:10.371694 | orchestrator | Wednesday 11 June 2025 14:24:10 +0000 (0:00:00.593) 0:00:07.488 ********
2025-06-11 14:24:10.371700 | orchestrator | ===============================================================================
2025-06-11 14:24:10.371706 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.62s
2025-06-11 14:24:10.371712 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s
2025-06-11 14:24:10.371718 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-06-11 14:24:10.371724 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s
2025-06-11 14:24:10.371731 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-06-11 14:24:10.622009 | orchestrator | + osism apply known-hosts
2025-06-11 14:24:12.317204 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:24:12.317336 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:24:12.317353 | orchestrator | Registering Redlock._release_script
2025-06-11 14:24:12.374340 | orchestrator | 2025-06-11 14:24:12 | INFO  | Task c4cc714d-e382-4173-8e3a-f57b6033da80 (known-hosts) was prepared for execution.
2025-06-11 14:24:12.374440 | orchestrator | 2025-06-11 14:24:12 | INFO  | It takes a moment until task c4cc714d-e382-4173-8e3a-f57b6033da80 (known-hosts) has been started and output is visible here.
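
The known-hosts play that follows scans each testbed host twice, once by hostname and once by ansible_host (IP address), and writes the collected rsa, ecdsa and ed25519 keys into the operator's known_hosts file. Reduced to plain shell, the scan step is roughly the following sketch; the key types match the entries written below, but the exact flags the role passes to ssh-keyscan are not visible in this log:

    for host in testbed-manager testbed-node-{0..5}; do
        # Collect one line per key type for each host.
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host"
    done >> ~/.ssh/known_hosts

Pre-seeding known_hosts this way is what lets the later plays fan out over SSH to all six nodes without interactive host-key prompts.
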
2025-06-11 14:24:28.598137 | orchestrator |
2025-06-11 14:24:28.598230 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-11 14:24:28.598242 | orchestrator |
2025-06-11 14:24:28.598251 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-11 14:24:28.598260 | orchestrator | Wednesday 11 June 2025 14:24:16 +0000 (0:00:00.150) 0:00:00.150 ********
2025-06-11 14:24:28.598268 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-11 14:24:28.598276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-11 14:24:28.598283 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-11 14:24:28.598291 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-11 14:24:28.598298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-11 14:24:28.598305 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-11 14:24:28.598312 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-11 14:24:28.598319 | orchestrator |
2025-06-11 14:24:28.598326 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-11 14:24:28.598334 | orchestrator | Wednesday 11 June 2025 14:24:21 +0000 (0:00:05.740) 0:00:05.890 ********
2025-06-11 14:24:28.598343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-11 14:24:28.598370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-11 14:24:28.598377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-11 14:24:28.598385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-11 14:24:28.598399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-11 14:24:28.598406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-11 14:24:28.598413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-11 14:24:28.598420 | orchestrator |
2025-06-11 14:24:28.598427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598434 | orchestrator | Wednesday 11 June 2025 14:24:22 +0000 (0:00:00.181) 0:00:06.071 ********
2025-06-11 14:24:28.598441 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7aFn+UurbZltitdT3o7ppzKct4edwW6dbE0N8XFZLZRBVPmwH6MrNkjubhN7ls3l4pxjKlSbICStWAAfxDOag=)
2025-06-11 14:24:28.598453 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCibHgmHcImX8LBOLwILF0YvNqVPIw1hde54szlZ8TdCOaiz9tOw8hHaIyEanS3+5irvKwlqqsyo3I9oUL1TPW0LiV+kMg4OYVmOpNOgzQTJsJREURik1VVVXpPpx0eMvHDfwNSVnetX2DwVUe2bTCIaij06a7h7dESHiD0MZk/xosCbiyTQhX0Zvzhr3PGA797AtQEAiJBDYivNdoQYxvqTUx4+NuIZTZXiLCFhJag0wMwCqGFXIU8aWQKe+qXavkwMWShguzDiEV2jzRNWybg1czxNEu3j7NSIxrUE1wGRbxXENNdUuS+wV3L1d6QUx63aA8J1RHAdXGZU2YmapTS6cX376+h2ZGU60iq5/8i13QxDdkuxKNuyeCoh9nDnn8IaUq1yO7aey+4vqtplK6LTrx0b+wx3J28C9xQ/zde0g56psMGi/PMgZTWicdUSaWvODtGBMYuQEtwMQ0bnGjBIJMTVmXPxjF8vrp9BOVXpMQJsc0LAFhOZ1vNS1eZ+Ls=)
2025-06-11 14:24:28.598462 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG0BK8zkCTS2tABFik9JRNMP7DayIXzfg7l7H7Vj59kl)
2025-06-11 14:24:28.598470 | orchestrator |
2025-06-11 14:24:28.598477 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598501 | orchestrator | Wednesday 11 June 2025 14:24:23 +0000 (0:00:01.185) 0:00:07.257 ********
2025-06-11 14:24:28.598524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+sZMrKoEKuTp/F5BHOGh0uOfrOP8tIUhLFKrxpWgROC4skVHrL7XogsBR4KHWo+Z74ZMOV9tpMSmylWK7DJ+NNxdZFgMC/ozByDcBrm7D4GUyaDOeREcTRc+YAorXwteIsBhaBScjCrf4Wup5L7dlgik4j5epyL3lMWN8he4mZXz2TNlICZpH2wp8ZB2nDh89rGpXnWZhJNJr8dCQxV1zoCEEwvrdZ8L4FF9ExOyo0BZnkEXYoOd7sI3ecmJi2kbwBeVLKh4gMZvuZsT7C9FRnoyV5467mke+mVvOO5iBzNdyMo73hOCb4jUIPYILyP93vfO5bslcFsN9X9BFtqco+TbpbhRGibbmWgJsVd8Nj+vfQZvNdQV8ThK5yhNhNCcJ8KeRPwePtgqwpiI5SIxADr/WqaZjJ0DPL08P+/kmx3fKFMnCJx5bNcoxPdSGN55B8cdiQ0cgW9S8R/gSgP4Oj+8qaI3CBXsmUobu+R8va1gG4hekVW4prT4Zz0DL3T8=)
2025-06-11 14:24:28.598532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ0Zv90sXuW0f0YsYkey3F4Oyo9/qsLZr7tDL0NQnUd/ddF1k/EzCO6Ohf2oTqpR5HFmoKdfWTtiYoSO+qrdlNw=)
2025-06-11 14:24:28.598539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKv1HAZGpuHgCLRdp5VOnSUp7BDu1imdrgvSw7ZKYQN)
2025-06-11 14:24:28.598547 | orchestrator |
2025-06-11 14:24:28.598560 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598567 | orchestrator | Wednesday 11 June 2025 14:24:24 +0000 (0:00:01.038) 0:00:08.295 ********
2025-06-11 14:24:28.598574 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTIBQNaWBtWLkoj+G+Y9Fp/f/ddUDFamzPpJ1OnKjCOfjAHyz0LfGQWTihttWZ721zzmD8wccV9nDofX9Lkd+jTwK4r0NOpvnKTc6HLw2gXHTzO/gR8KIkY59xJNcVkV8mbf4eibsNFFSPsM3rIoFkSAXpVZW0FR5AeK/tttQ6yvwFvfQMj3mvfaPoOvUcka3fLmVMYXuo/Xf4Sn6yzObPyBKNtJbQvGGOj1mvVGQYUI9e52PlFJJbuwgrVFyh58HjABq40UPON5NJDacPIDlnrMNQVK6dvMoJL7ZXsdpj/HUvJkJFDljEUQ/8GhlC4yrgh6+I0UJmjf4lsPb1oehnr+k63rm/u8UEGTfzxWc+nBcR2B86vAziZ9nCoh24kwUgmUXT7MIoYiYdba0WIUek/88Xeu9l1FrHhhY4DfFNFQo9Vp8zQsNRJ11slRko/tTZwN6FjjuorFxeM4YpKg3/0QDLX/GJAzN4lGF2CSHyNBbJQRWi8jPAJRGsopa2hQc=)
2025-06-11 14:24:28.598582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJrlaDgQGg+m6Cl7DK2X2s2BjX7Yu9KTC/yi70u7FDWG)
2025-06-11 14:24:28.598589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBy0e5UAyjn5jjeJposi9Aj65FCubJojs3K3iyW1jRhPS4Wz3C4Zu4E1K3Bo6av5ZDVxCXsP3AqzEu7Y4F6Ivw=)
2025-06-11 14:24:28.598596 | orchestrator |
2025-06-11 14:24:28.598603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598652 | orchestrator | Wednesday 11 June 2025 14:24:25 +0000 (0:00:01.063) 0:00:09.358 ********
2025-06-11 14:24:28.598660 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCax+/Hiwfbmo986qWDXJ5st4BqjUzGB7Sliu2Ygw3XSyf+Lj9jviwdNcGBQF9fjpK2Rm50tkvPsxQq2UqEFlzs7xu1MRCGN4+DkoeCYyhDB9JQloA3PMR8+gDNHwBRTMDhQWrJlzrd1cB8Up1vAZAqJo8QUDNdykQfNuYouPYd5l1jIUtnxXOc5TjNt+SFK7TvqA7GkGLBhLezrC3j1vg7uhBOuPmniYCE2K0TGk89FbIfA5ouyCymCGp0oOM0PgwcUjSpDR/QIsL3F7Gc6X/BJCLHn6HJCCnhUgRHR3oSCpZ0st4la3VbJ3d7Kd9uk+RCXQbs32ysGA82A4bfdoDrnzQRoLuWCR/VDBg7gQDISkmoNCkFPNlzVKpBHNP8jz9g1x09J/ZM33wqLD/GiMfjshVWIuPlYuFq4txNdaOS0nnCc/ko5GlXqbzV8OjZAX8Yh4wnYl9llq0ukaTYGGXSZKNrWKQN1m5JCIkgkD1CFEQgmpsOH3wqxtbu5n+VqDE=)
2025-06-11 14:24:28.598667 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOp9iRt1N4sPGXorUI5U2+LRDuO9bmsqhxTnZaHyiRw935MLSD13VNVvRE9Kl8kEqM6mRoo0T2iX9pYQetQx/Q=)
2025-06-11 14:24:28.598674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIAEQn4IwcK8q6WBkYFn2MDmKV4Wic6AEsOX+hlbkqLK)
2025-06-11 14:24:28.598681 | orchestrator |
2025-06-11 14:24:28.598688 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598695 | orchestrator | Wednesday 11 June 2025 14:24:26 +0000 (0:00:01.063) 0:00:10.421 ********
2025-06-11 14:24:28.598702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCypTBQ2fXgqVW1vzdrRo9IESKCTlhJgmrt8et6TmJ1ls920d6aE5fTr1F1o2F7QPyLd9OIo7WDfm+AiOiH4Mo+k9ziO/nYYmp9DYQdo2T81jbQBWDwOKvUg6s2VpBLH6evFQ1v+WYwVg/Ye5QiphmLl6CF6voeL6GDtAisrsDTkL1IIxoIXhP8f646vFWbJtowqmdDv/RfzXL7PEsJ27npFRxW2jf6f99U44KysuV3HaUt9T2hxj/wU+290/cPT139clMljG26ooUEDymTFpRViy1ULBMJCYoY+rRb1sYjvJsQe/b1ur06ZL5t0apphUUQvDGxTGKx+c0X2rcaVl/E9wzzsEcaKz2bcuHpt9V5bbB5zzBUsP4lfyqRMeZ4ZrcJmXKOTh5JJBF49F2mtJ+e58P9HLhnj+nYFmo0WpKIONXoV4Vf1mfI0/e8o0Hj0dhGR6fb1ZiWx64+nWHtbu5wlq71qZ0dBJAd2ase5Fc6SiW+3knaCawjbzUiEmIMdH0=)
2025-06-11 14:24:28.598709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGD9H4T/5dJ073SB8xagZZe+eoUnBg0c43QUt2jtmbzLxdsJ9vKDtzj08gjxShmrIB9ZGytsMAf8yhkRK0govKw=)
2025-06-11 14:24:28.598716 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB3saJ4WppdMEpf9E2NhtwSd2u7G0xOPGcgqhsJFJ9NW)
2025-06-11 14:24:28.598723 | orchestrator |
2025-06-11 14:24:28.598730 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:28.598737 | orchestrator | Wednesday 11 June 2025 14:24:27 +0000 (0:00:01.044) 0:00:11.466 ********
2025-06-11 14:24:28.598752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMuMpSaRp22EBKf79J3GRsztzWSBVFUoybAE6UwAcNEi)
2025-06-11 14:24:39.228219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTIgiKKHzmQaR+2lzJ0nGecg+/6NpP3cXCPBAVQGldQcI70xZfJH+r6dxllOcgsiQqILwzgG1ODpagxsBzxtJ4hppv30bn96ta0FYoom42X+GSb/KhfaomwTil9QdqrYoOOcEvNP/3klmBuzSPJDb5Sl+7/M89NlC4WXDwRw4mpWPvALNA/GLwHkTZ/sXooEbJl6t2f9AvwoQRcq9IM3RXrAoo+RBGgGlUrdxQpJesvyXSqj6hFOgSZ2bfmlbXMbFxwrmQww5qAZ44G21VFVB5QPKr3NoMIVagDv6g6lUzgx5e54pTts70pwWdiSWMDjX+C8yTUQAXrjR3tSrMw2KsaxeQF+P/3k13gpJuYKnfUm80ZV2T728F56znL2N0ZfqYY8AlHn6O7KkN/zttMg9w/F6e7i2I6oLpK0ycrGWZaWNj1CKsthS3/G151L62upLElakAjkNySX7eagCw5fK0MUvXbfwRDmuqZqAD7XENcr4Xa+DHlNJeOBdKhCwV/4k=)
2025-06-11 14:24:39.228332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0UL6gpYG/00C4x86eEo8J3Qn9NTtn9Oc0X2J4abTjUw8lLVqlL+pim2Z+oeVcoGqrz2w58J83QsZ79I2HePg8=)
2025-06-11 14:24:39.228351 | orchestrator |
2025-06-11 14:24:39.228363 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:39.228374 | orchestrator | Wednesday 11 June 2025 14:24:28 +0000 (0:00:01.013) 0:00:12.480 ********
2025-06-11 14:24:39.228384 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPmpWAH6oBpUwn/zsa7AoEvuPEA38P0NAcqThsZpqKbr)
2025-06-11 14:24:39.228396 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CPdbTpizRFGAnd7i8YtQFkEf5TPhhjIXhAnaStM7FXNRqhJq2aYok+q7bsI76rhIE6nEPM2Hnp/rsjVQPg+8IFgIPMDCbjWOuwwZ9ao0japEsXSY8xBlfWK2JovpKa9mF8d0Zv+6NgpvztajpVLcYSRyhANdMJJuOmPHgQ+zNjz54t2OFgfmL/jTHZUbWIoRY6L4jy8dwGtkmhKT8hIfZY3Rvi41ILPnlCfZP6Z608w3OEOt8WouMNP2xijbrTwklGZV1takZ3w1Ju1TgrK3lwp6zONp770n6jwj2J88pghoDXj6WVYNyTGcdpU/ebGGGmTnCZEj1QihSCsZ5QJFLJC4LUPDJ3ftbVrLu7mRhw1yaaAqXX7dr0p3tWdZuVK/CSYZriYp7vxknLrs4uiFZcXmdf9gQ6ycXd1gE0zqbFiSlocr1Y0WPrWpRe61UO0rfblGOihq5Ace1DfAVOHi5tduEIEoJLrV9yk8bvcCCcmU39MIsY7G/nFnLMGjcKc=)
2025-06-11 14:24:39.228418 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGyEI9wQEY5Pka5WQYa1mahLa5HSkAwWkT6avZ9EZ8G6TGYR+Lhc4eYk5YylpRcOYX/LfkXsY9oio6nQ3rjFmL8=)
2025-06-11 14:24:39.228428 | orchestrator |
2025-06-11 14:24:39.228438 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-11 14:24:39.228449 | orchestrator | Wednesday 11 June 2025 14:24:29 +0000 (0:00:01.014) 0:00:13.495 ********
2025-06-11 14:24:39.228459 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-11 14:24:39.228469 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-11 14:24:39.228479 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-11 14:24:39.228489 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-11 14:24:39.228498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-11 14:24:39.228508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-11 14:24:39.228517 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-11 14:24:39.228527 | orchestrator |
2025-06-11 14:24:39.228537 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-11 14:24:39.228547 | orchestrator | Wednesday 11 June 2025 14:24:34 +0000 (0:00:05.242) 0:00:18.737 ********
2025-06-11 14:24:39.228558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-11 14:24:39.228570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-11 14:24:39.228579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-11 14:24:39.228606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-11 14:24:39.228618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-11 14:24:39.228634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-11 14:24:39.228651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-11 14:24:39.228666 | orchestrator |
2025-06-11 14:24:39.228701 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:39.228718 | orchestrator | Wednesday 11 June 2025 14:24:35 +0000 (0:00:00.178) 0:00:18.915 ********
2025-06-11 14:24:39.228731 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7aFn+UurbZltitdT3o7ppzKct4edwW6dbE0N8XFZLZRBVPmwH6MrNkjubhN7ls3l4pxjKlSbICStWAAfxDOag=)
2025-06-11 14:24:39.228747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCibHgmHcImX8LBOLwILF0YvNqVPIw1hde54szlZ8TdCOaiz9tOw8hHaIyEanS3+5irvKwlqqsyo3I9oUL1TPW0LiV+kMg4OYVmOpNOgzQTJsJREURik1VVVXpPpx0eMvHDfwNSVnetX2DwVUe2bTCIaij06a7h7dESHiD0MZk/xosCbiyTQhX0Zvzhr3PGA797AtQEAiJBDYivNdoQYxvqTUx4+NuIZTZXiLCFhJag0wMwCqGFXIU8aWQKe+qXavkwMWShguzDiEV2jzRNWybg1czxNEu3j7NSIxrUE1wGRbxXENNdUuS+wV3L1d6QUx63aA8J1RHAdXGZU2YmapTS6cX376+h2ZGU60iq5/8i13QxDdkuxKNuyeCoh9nDnn8IaUq1yO7aey+4vqtplK6LTrx0b+wx3J28C9xQ/zde0g56psMGi/PMgZTWicdUSaWvODtGBMYuQEtwMQ0bnGjBIJMTVmXPxjF8vrp9BOVXpMQJsc0LAFhOZ1vNS1eZ+Ls=)
2025-06-11 14:24:39.228764 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG0BK8zkCTS2tABFik9JRNMP7DayIXzfg7l7H7Vj59kl)
2025-06-11 14:24:39.228782 | orchestrator |
2025-06-11 14:24:39.228799 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:39.228815 | orchestrator | Wednesday 11 June 2025 14:24:36 +0000 (0:00:01.033) 0:00:19.949 ********
2025-06-11 14:24:39.228830 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ0Zv90sXuW0f0YsYkey3F4Oyo9/qsLZr7tDL0NQnUd/ddF1k/EzCO6Ohf2oTqpR5HFmoKdfWTtiYoSO+qrdlNw=)
2025-06-11 14:24:39.228867 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+sZMrKoEKuTp/F5BHOGh0uOfrOP8tIUhLFKrxpWgROC4skVHrL7XogsBR4KHWo+Z74ZMOV9tpMSmylWK7DJ+NNxdZFgMC/ozByDcBrm7D4GUyaDOeREcTRc+YAorXwteIsBhaBScjCrf4Wup5L7dlgik4j5epyL3lMWN8he4mZXz2TNlICZpH2wp8ZB2nDh89rGpXnWZhJNJr8dCQxV1zoCEEwvrdZ8L4FF9ExOyo0BZnkEXYoOd7sI3ecmJi2kbwBeVLKh4gMZvuZsT7C9FRnoyV5467mke+mVvOO5iBzNdyMo73hOCb4jUIPYILyP93vfO5bslcFsN9X9BFtqco+TbpbhRGibbmWgJsVd8Nj+vfQZvNdQV8ThK5yhNhNCcJ8KeRPwePtgqwpiI5SIxADr/WqaZjJ0DPL08P+/kmx3fKFMnCJx5bNcoxPdSGN55B8cdiQ0cgW9S8R/gSgP4Oj+8qaI3CBXsmUobu+R8va1gG4hekVW4prT4Zz0DL3T8=)
2025-06-11 14:24:39.228877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKv1HAZGpuHgCLRdp5VOnSUp7BDu1imdrgvSw7ZKYQN)
2025-06-11 14:24:39.228887 | orchestrator |
2025-06-11 14:24:39.228897 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:39.228907 | orchestrator | Wednesday 11 June 2025 14:24:37 +0000 (0:00:01.055) 0:00:21.004 ********
2025-06-11 14:24:39.228916 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBy0e5UAyjn5jjeJposi9Aj65FCubJojs3K3iyW1jRhPS4Wz3C4Zu4E1K3Bo6av5ZDVxCXsP3AqzEu7Y4F6Ivw=)
2025-06-11 14:24:39.228943 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTIBQNaWBtWLkoj+G+Y9Fp/f/ddUDFamzPpJ1OnKjCOfjAHyz0LfGQWTihttWZ721zzmD8wccV9nDofX9Lkd+jTwK4r0NOpvnKTc6HLw2gXHTzO/gR8KIkY59xJNcVkV8mbf4eibsNFFSPsM3rIoFkSAXpVZW0FR5AeK/tttQ6yvwFvfQMj3mvfaPoOvUcka3fLmVMYXuo/Xf4Sn6yzObPyBKNtJbQvGGOj1mvVGQYUI9e52PlFJJbuwgrVFyh58HjABq40UPON5NJDacPIDlnrMNQVK6dvMoJL7ZXsdpj/HUvJkJFDljEUQ/8GhlC4yrgh6+I0UJmjf4lsPb1oehnr+k63rm/u8UEGTfzxWc+nBcR2B86vAziZ9nCoh24kwUgmUXT7MIoYiYdba0WIUek/88Xeu9l1FrHhhY4DfFNFQo9Vp8zQsNRJ11slRko/tTZwN6FjjuorFxeM4YpKg3/0QDLX/GJAzN4lGF2CSHyNBbJQRWi8jPAJRGsopa2hQc=)
2025-06-11 14:24:39.228954 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJrlaDgQGg+m6Cl7DK2X2s2BjX7Yu9KTC/yi70u7FDWG)
2025-06-11 14:24:39.228964 | orchestrator |
2025-06-11 14:24:39.228974 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:39.228984 | orchestrator | Wednesday 11 June 2025 14:24:38 +0000 (0:00:01.091) 0:00:22.096 ********
2025-06-11 14:24:39.228993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIAEQn4IwcK8q6WBkYFn2MDmKV4Wic6AEsOX+hlbkqLK)
2025-06-11 14:24:39.229017 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCax+/Hiwfbmo986qWDXJ5st4BqjUzGB7Sliu2Ygw3XSyf+Lj9jviwdNcGBQF9fjpK2Rm50tkvPsxQq2UqEFlzs7xu1MRCGN4+DkoeCYyhDB9JQloA3PMR8+gDNHwBRTMDhQWrJlzrd1cB8Up1vAZAqJo8QUDNdykQfNuYouPYd5l1jIUtnxXOc5TjNt+SFK7TvqA7GkGLBhLezrC3j1vg7uhBOuPmniYCE2K0TGk89FbIfA5ouyCymCGp0oOM0PgwcUjSpDR/QIsL3F7Gc6X/BJCLHn6HJCCnhUgRHR3oSCpZ0st4la3VbJ3d7Kd9uk+RCXQbs32ysGA82A4bfdoDrnzQRoLuWCR/VDBg7gQDISkmoNCkFPNlzVKpBHNP8jz9g1x09J/ZM33wqLD/GiMfjshVWIuPlYuFq4txNdaOS0nnCc/ko5GlXqbzV8OjZAX8Yh4wnYl9llq0ukaTYGGXSZKNrWKQN1m5JCIkgkD1CFEQgmpsOH3wqxtbu5n+VqDE=)
2025-06-11 14:24:43.263599 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOp9iRt1N4sPGXorUI5U2+LRDuO9bmsqhxTnZaHyiRw935MLSD13VNVvRE9Kl8kEqM6mRoo0T2iX9pYQetQx/Q=)
2025-06-11 14:24:43.263711 | orchestrator |
2025-06-11 14:24:43.263728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:43.263741 | orchestrator | Wednesday 11 June 2025 14:24:39 +0000 (0:00:01.010) 0:00:23.106 ********
2025-06-11 14:24:43.263753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB3saJ4WppdMEpf9E2NhtwSd2u7G0xOPGcgqhsJFJ9NW)
2025-06-11 14:24:43.263769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCypTBQ2fXgqVW1vzdrRo9IESKCTlhJgmrt8et6TmJ1ls920d6aE5fTr1F1o2F7QPyLd9OIo7WDfm+AiOiH4Mo+k9ziO/nYYmp9DYQdo2T81jbQBWDwOKvUg6s2VpBLH6evFQ1v+WYwVg/Ye5QiphmLl6CF6voeL6GDtAisrsDTkL1IIxoIXhP8f646vFWbJtowqmdDv/RfzXL7PEsJ27npFRxW2jf6f99U44KysuV3HaUt9T2hxj/wU+290/cPT139clMljG26ooUEDymTFpRViy1ULBMJCYoY+rRb1sYjvJsQe/b1ur06ZL5t0apphUUQvDGxTGKx+c0X2rcaVl/E9wzzsEcaKz2bcuHpt9V5bbB5zzBUsP4lfyqRMeZ4ZrcJmXKOTh5JJBF49F2mtJ+e58P9HLhnj+nYFmo0WpKIONXoV4Vf1mfI0/e8o0Hj0dhGR6fb1ZiWx64+nWHtbu5wlq71qZ0dBJAd2ase5Fc6SiW+3knaCawjbzUiEmIMdH0=)
2025-06-11 14:24:43.263783 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGD9H4T/5dJ073SB8xagZZe+eoUnBg0c43QUt2jtmbzLxdsJ9vKDtzj08gjxShmrIB9ZGytsMAf8yhkRK0govKw=)
2025-06-11 14:24:43.263794 | orchestrator |
2025-06-11 14:24:43.263805 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:43.263816 | orchestrator | Wednesday 11 June 2025 14:24:40 +0000 (0:00:01.035) 0:00:24.142 ********
2025-06-11 14:24:43.263826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0UL6gpYG/00C4x86eEo8J3Qn9NTtn9Oc0X2J4abTjUw8lLVqlL+pim2Z+oeVcoGqrz2w58J83QsZ79I2HePg8=)
2025-06-11 14:24:43.263910 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTIgiKKHzmQaR+2lzJ0nGecg+/6NpP3cXCPBAVQGldQcI70xZfJH+r6dxllOcgsiQqILwzgG1ODpagxsBzxtJ4hppv30bn96ta0FYoom42X+GSb/KhfaomwTil9QdqrYoOOcEvNP/3klmBuzSPJDb5Sl+7/M89NlC4WXDwRw4mpWPvALNA/GLwHkTZ/sXooEbJl6t2f9AvwoQRcq9IM3RXrAoo+RBGgGlUrdxQpJesvyXSqj6hFOgSZ2bfmlbXMbFxwrmQww5qAZ44G21VFVB5QPKr3NoMIVagDv6g6lUzgx5e54pTts70pwWdiSWMDjX+C8yTUQAXrjR3tSrMw2KsaxeQF+P/3k13gpJuYKnfUm80ZV2T728F56znL2N0ZfqYY8AlHn6O7KkN/zttMg9w/F6e7i2I6oLpK0ycrGWZaWNj1CKsthS3/G151L62upLElakAjkNySX7eagCw5fK0MUvXbfwRDmuqZqAD7XENcr4Xa+DHlNJeOBdKhCwV/4k=)
2025-06-11 14:24:43.263949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMuMpSaRp22EBKf79J3GRsztzWSBVFUoybAE6UwAcNEi)
2025-06-11 14:24:43.263962 | orchestrator |
2025-06-11 14:24:43.263972 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-11 14:24:43.263983 | orchestrator | Wednesday 11 June 2025 14:24:41 +0000 (0:00:01.017) 0:00:25.159 ********
2025-06-11 14:24:43.263995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CPdbTpizRFGAnd7i8YtQFkEf5TPhhjIXhAnaStM7FXNRqhJq2aYok+q7bsI76rhIE6nEPM2Hnp/rsjVQPg+8IFgIPMDCbjWOuwwZ9ao0japEsXSY8xBlfWK2JovpKa9mF8d0Zv+6NgpvztajpVLcYSRyhANdMJJuOmPHgQ+zNjz54t2OFgfmL/jTHZUbWIoRY6L4jy8dwGtkmhKT8hIfZY3Rvi41ILPnlCfZP6Z608w3OEOt8WouMNP2xijbrTwklGZV1takZ3w1Ju1TgrK3lwp6zONp770n6jwj2J88pghoDXj6WVYNyTGcdpU/ebGGGmTnCZEj1QihSCsZ5QJFLJC4LUPDJ3ftbVrLu7mRhw1yaaAqXX7dr0p3tWdZuVK/CSYZriYp7vxknLrs4uiFZcXmdf9gQ6ycXd1gE0zqbFiSlocr1Y0WPrWpRe61UO0rfblGOihq5Ace1DfAVOHi5tduEIEoJLrV9yk8bvcCCcmU39MIsY7G/nFnLMGjcKc=)
2025-06-11 14:24:43.264006 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGyEI9wQEY5Pka5WQYa1mahLa5HSkAwWkT6avZ9EZ8G6TGYR+Lhc4eYk5YylpRcOYX/LfkXsY9oio6nQ3rjFmL8=)
2025-06-11 14:24:43.264018 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPmpWAH6oBpUwn/zsa7AoEvuPEA38P0NAcqThsZpqKbr)
2025-06-11 14:24:43.264029 | orchestrator |
2025-06-11 14:24:43.264039 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-06-11 14:24:43.264050 | orchestrator | Wednesday 11 June 2025 14:24:42 +0000 (0:00:01.019) 0:00:26.179 ********
2025-06-11 14:24:43.264062 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-11 14:24:43.264073 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-11 14:24:43.264084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-11 14:24:43.264095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-11 14:24:43.264126 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-11 14:24:43.264140 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-11 14:24:43.264152 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-11 14:24:43.264164 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:24:43.264176 | orchestrator |
2025-06-11 14:24:43.264188 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-06-11 14:24:43.264200 | orchestrator | Wednesday 11 June 2025 14:24:42 +0000 (0:00:00.159) 0:00:26.339 ********
2025-06-11 14:24:43.264213 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:24:43.264225 | orchestrator |
2025-06-11 14:24:43.264254 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-11 14:24:43.264266 | orchestrator | Wednesday 11 June 2025 14:24:42 +0000 (0:00:00.060) 0:00:26.399 ********
2025-06-11 14:24:43.264278 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:24:43.264290 | orchestrator |
2025-06-11 14:24:43.264302 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-11 14:24:43.264314 | orchestrator | Wednesday 11 June 2025 14:24:42 +0000 (0:00:00.055) 0:00:26.455 ********
2025-06-11 14:24:43.264325 | orchestrator | changed: [testbed-manager]
2025-06-11 14:24:43.264338 | orchestrator |
2025-06-11 14:24:43.264350 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:24:43.264371 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-11 14:24:43.264385 | orchestrator |
2025-06-11 14:24:43.264396 | orchestrator |
2025-06-11 14:24:43.264409 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:24:43.264421 | orchestrator | Wednesday 11 June 2025 14:24:43 +0000 (0:00:00.459) 0:00:26.915 ********
2025-06-11 14:24:43.264432 | orchestrator | ===============================================================================
2025-06-11 14:24:43.264443 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.74s
2025-06-11 14:24:43.264454 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s
2025-06-11 14:24:43.264465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s
2025-06-11 14:24:43.264476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-06-11 14:24:43.264486 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-11 14:24:43.264497 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-11 14:24:43.264508 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-11 14:24:43.264518 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-11 14:24:43.264529 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-11 14:24:43.264540 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-11 14:24:43.264551 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-11 14:24:43.264561 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-11 14:24:43.264572 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-11 14:24:43.264583 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-11 14:24:43.264593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-11 14:24:43.264604 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-11 14:24:43.264615 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s
2025-06-11 14:24:43.264626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-06-11 14:24:43.264637 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2025-06-11 14:24:43.264648 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s
2025-06-11 14:24:43.488003 | orchestrator | + osism apply squid
2025-06-11 14:24:45.109174 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:24:45.109309 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:24:45.109336 | orchestrator | Registering Redlock._release_script
2025-06-11 14:24:45.168107 | orchestrator | 2025-06-11 14:24:45 | INFO  | Task d41d9884-1608-4d41-b35e-18590a3f0cb4 (squid) was prepared for execution.
2025-06-11 14:24:45.168181 | orchestrator | 2025-06-11 14:24:45 | INFO  | It takes a moment until task d41d9884-1608-4d41-b35e-18590a3f0cb4 (squid) has been started and output is visible here.
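
The squid play that follows installs the proxy as a compose-managed service on the manager and gates completion on the container's health check. Once the play reports the service healthy, a manual spot check from the manager could look like the sketch below; the container name squid and the proxy port 3128 (squid's conventional default) are assumptions, as neither is visible in this log:

    # Ask Docker for the health state the play's handler waits on.
    docker inspect -f '{{.State.Health.Status}}' squid
    # Fetch a URL through the proxy to confirm it actually forwards traffic.
    curl -sSf -x http://192.168.16.5:3128 https://www.example.com -o /dev/null && echo 'proxy OK'

192.168.16.5 is the manager address seen in the docker compose listing earlier in this log.
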
2025-06-11 14:26:39.556201 | orchestrator |
2025-06-11 14:26:39.556343 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-11 14:26:39.556362 | orchestrator |
2025-06-11 14:26:39.556407 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-11 14:26:39.556421 | orchestrator | Wednesday 11 June 2025 14:24:49 +0000 (0:00:00.196) 0:00:00.196 ********
2025-06-11 14:26:39.556433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-11 14:26:39.556445 | orchestrator |
2025-06-11 14:26:39.556481 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-11 14:26:39.556493 | orchestrator | Wednesday 11 June 2025 14:24:49 +0000 (0:00:00.090) 0:00:00.286 ********
2025-06-11 14:26:39.556504 | orchestrator | ok: [testbed-manager]
2025-06-11 14:26:39.556516 | orchestrator |
2025-06-11 14:26:39.556527 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-11 14:26:39.556538 | orchestrator | Wednesday 11 June 2025 14:24:50 +0000 (0:00:01.471) 0:00:01.758 ********
2025-06-11 14:26:39.556549 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-11 14:26:39.556560 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-11 14:26:39.556570 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-11 14:26:39.556582 | orchestrator |
2025-06-11 14:26:39.556592 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-11 14:26:39.556603 | orchestrator | Wednesday 11 June 2025 14:24:51 +0000 (0:00:01.108) 0:00:02.866 ********
2025-06-11 14:26:39.556614 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-11 14:26:39.556625 | orchestrator |
2025-06-11 14:26:39.556635 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-11 14:26:39.556646 | orchestrator | Wednesday 11 June 2025 14:24:52 +0000 (0:00:01.044) 0:00:03.911 ********
2025-06-11 14:26:39.556656 | orchestrator | ok: [testbed-manager]
2025-06-11 14:26:39.556667 | orchestrator |
2025-06-11 14:26:39.556678 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-11 14:26:39.556689 | orchestrator | Wednesday 11 June 2025 14:24:53 +0000 (0:00:00.356) 0:00:04.268 ********
2025-06-11 14:26:39.556701 | orchestrator | changed: [testbed-manager]
2025-06-11 14:26:39.556713 | orchestrator |
2025-06-11 14:26:39.556725 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-11 14:26:39.556736 | orchestrator | Wednesday 11 June 2025 14:24:54 +0000 (0:00:00.940) 0:00:05.209 ********
2025-06-11 14:26:39.556748 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
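
The FAILED - RETRYING message above is not an error: the Manage squid service task uses Ansible's retries/until mechanism, re-polling until the compose service reports the desired state (Ansible's default delay between retries is 5 seconds). As a shell-level equivalent of that pattern, again assuming the container is named squid:

    attempts=10
    # Poll until the container reports a running state, up to 10 attempts.
    until docker inspect -f '{{.State.Status}}' squid 2>/dev/null | grep -qx running; do
        (( --attempts )) || { echo 'squid did not come up' >&2; break; }
        sleep 5  # Ansible's default retry delay
    done

On the next poll the task succeeds (the ok below), after which the restart and health-wait handlers run.
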
2025-06-11 14:26:39.556760 | orchestrator | ok: [testbed-manager]
2025-06-11 14:26:39.556773 | orchestrator |
2025-06-11 14:26:39.556807 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-11 14:26:39.556820 | orchestrator | Wednesday 11 June 2025 14:25:26 +0000 (0:00:31.875) 0:00:37.085 ********
2025-06-11 14:26:39.556832 | orchestrator | changed: [testbed-manager]
2025-06-11 14:26:39.556845 | orchestrator |
2025-06-11 14:26:39.556857 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-11 14:26:39.556870 | orchestrator | Wednesday 11 June 2025 14:25:38 +0000 (0:00:12.514) 0:00:49.600 ********
2025-06-11 14:26:39.556882 | orchestrator | Pausing for 60 seconds
2025-06-11 14:26:39.556894 | orchestrator | changed: [testbed-manager]
2025-06-11 14:26:39.556906 | orchestrator |
2025-06-11 14:26:39.556917 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-11 14:26:39.556929 | orchestrator | Wednesday 11 June 2025 14:26:38 +0000 (0:01:00.083) 0:01:49.683 ********
2025-06-11 14:26:39.556942 | orchestrator | ok: [testbed-manager]
2025-06-11 14:26:39.556954 | orchestrator |
2025-06-11 14:26:39.556966 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-11 14:26:39.556978 | orchestrator | Wednesday 11 June 2025 14:26:38 +0000 (0:00:00.061) 0:01:49.744 ********
2025-06-11 14:26:39.556990 | orchestrator | changed: [testbed-manager]
2025-06-11 14:26:39.557002 | orchestrator |
2025-06-11 14:26:39.557014 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:26:39.557026 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:26:39.557038 | orchestrator |
2025-06-11 14:26:39.557050 | orchestrator |
2025-06-11 14:26:39.557061 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:26:39.557072 | orchestrator | Wednesday 11 June 2025 14:26:39 +0000 (0:00:00.599) 0:01:50.344 ********
2025-06-11 14:26:39.557091 | orchestrator | ===============================================================================
2025-06-11 14:26:39.557102 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-06-11 14:26:39.557114 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.88s
2025-06-11 14:26:39.557124 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.51s
2025-06-11 14:26:39.557155 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.47s
2025-06-11 14:26:39.557166 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.11s
2025-06-11 14:26:39.557177 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s
2025-06-11 14:26:39.557188 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s
2025-06-11 14:26:39.557199 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2025-06-11 14:26:39.557209 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-06-11 14:26:39.557220 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-06-11 14:26:39.557230 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-06-11 14:26:39.755494 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-11 14:26:39.755952 | orchestrator | ++ semver latest 9.0.0
2025-06-11 14:26:39.805127 | orchestrator | + [[ -1 -lt 0 ]]
2025-06-11 14:26:39.805207 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-11 14:26:39.805865 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-11 14:26:41.418622 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:26:41.418742 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:26:41.418834 | orchestrator | Registering Redlock._release_script
2025-06-11 14:26:41.494546 | orchestrator | 2025-06-11 14:26:41 | INFO  | Task 2ffc8653-9255-4799-991d-c33c619b6aa9 (operator) was prepared for execution.
2025-06-11 14:26:41.494695 | orchestrator | 2025-06-11 14:26:41 | INFO  | It takes a moment until task 2ffc8653-9255-4799-991d-c33c619b6aa9 (operator) has been started and output is visible here.
2025-06-11 14:26:57.866409 | orchestrator |
2025-06-11 14:26:57.866544 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-11 14:26:57.866573 | orchestrator |
2025-06-11 14:26:57.866586 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-11 14:26:57.866597 | orchestrator | Wednesday 11 June 2025 14:26:45 +0000 (0:00:00.145) 0:00:00.145 ********
2025-06-11 14:26:57.866609 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:26:57.866621 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:26:57.866632 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:26:57.866643 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:26:57.866654 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:26:57.866665 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:26:57.866676 | orchestrator |
2025-06-11 14:26:57.866687 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-11 14:26:57.866698 | orchestrator | Wednesday 11 June 2025 14:26:48 +0000 (0:00:03.261) 0:00:03.406 ********
2025-06-11 14:26:57.866709 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:26:57.866720 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:26:57.866731 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:26:57.866741 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:26:57.866752 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:26:57.866763 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:26:57.866842 | orchestrator |
2025-06-11 14:26:57.866860 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-11 14:26:57.866871 | orchestrator |
2025-06-11 14:26:57.866882 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-11 14:26:57.866893 | orchestrator | Wednesday 11 June 2025 14:26:49 +0000 (0:00:00.800) 0:00:04.207 ********
2025-06-11 14:26:57.866904 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:26:57.866915 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:26:57.866952 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:26:57.866966 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:26:57.866978 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:26:57.866990 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:26:57.867002 | orchestrator |
2025-06-11 14:26:57.867014 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-11 14:26:57.867026 | orchestrator | Wednesday 11 June 2025 14:26:49 +0000 (0:00:00.201) 0:00:04.408 ********
2025-06-11 14:26:57.867038 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:26:57.867050 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:26:57.867062 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:26:57.867073 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:26:57.867086 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:26:57.867097 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:26:57.867110 | orchestrator |
2025-06-11 14:26:57.867122 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-11 14:26:57.867135 | orchestrator | Wednesday 11 June 2025 14:26:49 +0000 (0:00:00.170) 0:00:04.579 ********
2025-06-11 14:26:57.867147 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:26:57.867160 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:26:57.867173 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:26:57.867185 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:26:57.867197 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:26:57.867209 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:26:57.867220 | orchestrator |
2025-06-11 14:26:57.867232 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-11 14:26:57.867245 | orchestrator | Wednesday 11 June 2025 14:26:50 +0000 (0:00:00.665) 0:00:05.245 ********
2025-06-11 14:26:57.867257 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:26:57.867269 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:26:57.867281 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:26:57.867293 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:26:57.867304 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:26:57.867315 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:26:57.867325 | orchestrator |
2025-06-11 14:26:57.867336 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-11 14:26:57.867347 | orchestrator | Wednesday 11 June 2025 14:26:51 +0000 (0:00:00.912) 0:00:06.158 ********
2025-06-11 14:26:57.867358 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-11 14:26:57.867369 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-11 14:26:57.867379 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-11 14:26:57.867390 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-11 14:26:57.867401 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-11 14:26:57.867411 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-11 14:26:57.867422 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-11 14:26:57.867432 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-11 14:26:57.867443 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-11 14:26:57.867454 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-11 14:26:57.867464 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-11 14:26:57.867475 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-11 14:26:57.867486 | orchestrator |
2025-06-11 14:26:57.867496 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-11 14:26:57.867507 | orchestrator | Wednesday 11 June 2025 14:26:52 +0000 (0:00:01.164) 0:00:07.322 ********
2025-06-11 14:26:57.867518 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:26:57.867529 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:26:57.867539 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:26:57.867550 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:26:57.867560 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:26:57.867571 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:26:57.867582 | orchestrator |
2025-06-11 14:26:57.867592 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-11 14:26:57.867611 | orchestrator | Wednesday 11 June 2025 14:26:53 +0000 (0:00:01.312) 0:00:08.634 ********
2025-06-11 14:26:57.867623 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-11 14:26:57.867634 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-11 14:26:57.867645 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-11 14:26:57.867656 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867684 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867696 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867707 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867718 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867728 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-11 14:26:57.867739 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867750 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867760 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867797 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867808 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867819 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-11 14:26:57.867830 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867840 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867851 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867861 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867872 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867882 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-11 14:26:57.867893 | orchestrator |
2025-06-11 14:26:57.867904 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-11 14:26:57.867915 | orchestrator | Wednesday 11 June 2025 14:26:55 +0000 (0:00:01.260) 0:00:09.894 ********
2025-06-11 14:26:57.867926 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:26:57.867937 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:26:57.867947 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:26:57.867958 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:26:57.867969 | orchestrator
| changed: [testbed-node-4] 2025-06-11 14:26:57.867980 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:26:57.867990 | orchestrator | 2025-06-11 14:26:57.868001 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-11 14:26:57.868012 | orchestrator | Wednesday 11 June 2025 14:26:55 +0000 (0:00:00.597) 0:00:10.492 ******** 2025-06-11 14:26:57.868022 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:26:57.868033 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:26:57.868044 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:26:57.868055 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:26:57.868065 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:26:57.868076 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:26:57.868087 | orchestrator | 2025-06-11 14:26:57.868115 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-11 14:26:57.868126 | orchestrator | Wednesday 11 June 2025 14:26:55 +0000 (0:00:00.205) 0:00:10.698 ******** 2025-06-11 14:26:57.868137 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-11 14:26:57.868148 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-11 14:26:57.868163 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:26:57.868181 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:26:57.868192 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-11 14:26:57.868203 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:26:57.868214 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-11 14:26:57.868224 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:26:57.868235 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-11 14:26:57.868245 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:26:57.868256 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-11 14:26:57.868267 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:26:57.868277 | orchestrator | 2025-06-11 14:26:57.868288 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-11 14:26:57.868299 | orchestrator | Wednesday 11 June 2025 14:26:56 +0000 (0:00:00.721) 0:00:11.419 ******** 2025-06-11 14:26:57.868310 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:26:57.868320 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:26:57.868331 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:26:57.868341 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:26:57.868352 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:26:57.868362 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:26:57.868373 | orchestrator | 2025-06-11 14:26:57.868383 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-11 14:26:57.868394 | orchestrator | Wednesday 11 June 2025 14:26:56 +0000 (0:00:00.143) 0:00:11.563 ******** 2025-06-11 14:26:57.868405 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:26:57.868416 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:26:57.868426 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:26:57.868437 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:26:57.868447 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:26:57.868458 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:26:57.868468 | orchestrator | 2025-06-11 14:26:57.868479 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-11 14:26:57.868490 | orchestrator | Wednesday 11 June 2025 14:26:56 +0000 (0:00:00.140) 0:00:11.703 ******** 2025-06-11 14:26:57.868501 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:26:57.868511 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:26:57.868522 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:26:57.868532 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:26:57.868543 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:26:57.868553 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:26:57.868564 | orchestrator | 2025-06-11 14:26:57.868580 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-11 14:26:57.868591 | orchestrator | Wednesday 11 June 2025 14:26:57 +0000 (0:00:00.178) 0:00:11.881 ******** 2025-06-11 14:26:57.868601 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:26:57.868612 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:26:57.868623 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:26:57.868634 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:26:57.868644 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:26:57.868662 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:26:58.318890 | orchestrator | 2025-06-11 14:26:58.319016 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-11 14:26:58.319035 | orchestrator | Wednesday 11 June 2025 14:26:57 +0000 (0:00:00.833) 0:00:12.715 ******** 2025-06-11 14:26:58.319048 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:26:58.319061 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:26:58.319072 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:26:58.319082 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:26:58.319093 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:26:58.319104 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:26:58.319115 | orchestrator | 2025-06-11 14:26:58.319126 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:26:58.319139 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319177 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319189 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319200 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319211 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319221 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:26:58.319232 | orchestrator | 2025-06-11 14:26:58.319243 | orchestrator | 2025-06-11 14:26:58.319254 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:26:58.319265 | orchestrator | Wednesday 11 June 2025 14:26:58 +0000 (0:00:00.256) 0:00:12.972 ******** 2025-06-11 14:26:58.319275 | orchestrator | =============================================================================== 2025-06-11 14:26:58.319286 | orchestrator | Gathering Facts 
--------------------------------------------------------- 3.26s 2025-06-11 14:26:58.319297 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s 2025-06-11 14:26:58.319307 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2025-06-11 14:26:58.319319 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2025-06-11 14:26:58.319330 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s 2025-06-11 14:26:58.319340 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.83s 2025-06-11 14:26:58.319351 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2025-06-11 14:26:58.319362 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2025-06-11 14:26:58.319372 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2025-06-11 14:26:58.319383 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-06-11 14:26:58.319396 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-06-11 14:26:58.319408 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2025-06-11 14:26:58.319420 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2025-06-11 14:26:58.319432 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-06-11 14:26:58.319445 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-06-11 14:26:58.319457 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-06-11 14:26:58.319469 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-06-11 14:26:58.550867 | orchestrator | + osism apply --environment custom facts 2025-06-11 14:27:00.159453 | orchestrator | 2025-06-11 14:27:00 | INFO  | Trying to run play facts in environment custom 2025-06-11 14:27:00.163574 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:27:00.163629 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:27:00.163651 | orchestrator | Registering Redlock._release_script 2025-06-11 14:27:00.221265 | orchestrator | 2025-06-11 14:27:00 | INFO  | Task db5333ff-8c9b-4d14-a133-0e79843e596d (facts) was prepared for execution. 2025-06-11 14:27:00.221390 | orchestrator | 2025-06-11 14:27:00 | INFO  | It takes a moment until task db5333ff-8c9b-4d14-a133-0e79843e596d (facts) has been started and output is visible here. 
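The facts play that follows first ensures /etc/ansible/facts.d exists on every host and then copies *.fact files (for example testbed_ceph_devices) into it. Non-executable .fact files are parsed by Ansible as JSON or INI and exposed under ansible_local after the next fact gathering. A minimal sketch of that pattern, using a hypothetical fact name and content rather than the actual testbed fact files:

    # Sketch: distribute a static custom fact the way the play below does.
    # The name "example" and the device list are illustrative only.
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/example.fact
        content: '{"devices": ["vdb", "vdc"]}'
        mode: "0644"

Once facts are regathered, the data is reachable on each host as ansible_local.example.devices.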
2025-06-11 14:27:40.070389 | orchestrator | 2025-06-11 14:27:40.070515 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-11 14:27:40.070542 | orchestrator | 2025-06-11 14:27:40.070563 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-11 14:27:40.070583 | orchestrator | Wednesday 11 June 2025 14:27:03 +0000 (0:00:00.082) 0:00:00.082 ******** 2025-06-11 14:27:40.070603 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:40.070622 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.070642 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:27:40.070659 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:27:40.070678 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.070698 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.070717 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:27:40.070736 | orchestrator | 2025-06-11 14:27:40.070785 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-11 14:27:40.070796 | orchestrator | Wednesday 11 June 2025 14:27:05 +0000 (0:00:01.435) 0:00:01.518 ******** 2025-06-11 14:27:40.070807 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:40.070818 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.070829 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.070840 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:27:40.070851 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:27:40.070862 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:27:40.070872 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.070883 | orchestrator | 2025-06-11 14:27:40.070893 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-11 14:27:40.070904 | orchestrator | 2025-06-11 14:27:40.070915 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-11 14:27:40.070926 | orchestrator | Wednesday 11 June 2025 14:27:06 +0000 (0:00:01.245) 0:00:02.763 ******** 2025-06-11 14:27:40.070936 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.070947 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.070958 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.070969 | orchestrator | 2025-06-11 14:27:40.070980 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-11 14:27:40.070991 | orchestrator | Wednesday 11 June 2025 14:27:06 +0000 (0:00:00.115) 0:00:02.878 ******** 2025-06-11 14:27:40.071002 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.071012 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.071023 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.071034 | orchestrator | 2025-06-11 14:27:40.071045 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-11 14:27:40.071055 | orchestrator | Wednesday 11 June 2025 14:27:06 +0000 (0:00:00.204) 0:00:03.083 ******** 2025-06-11 14:27:40.071067 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.071077 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.071088 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.071099 | orchestrator | 2025-06-11 14:27:40.071110 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-11 14:27:40.071120 | orchestrator | Wednesday 
11 June 2025 14:27:07 +0000 (0:00:00.207) 0:00:03.291 ******** 2025-06-11 14:27:40.071133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:27:40.071145 | orchestrator | 2025-06-11 14:27:40.071156 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-11 14:27:40.071187 | orchestrator | Wednesday 11 June 2025 14:27:07 +0000 (0:00:00.155) 0:00:03.446 ******** 2025-06-11 14:27:40.071199 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.071210 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.071220 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.071231 | orchestrator | 2025-06-11 14:27:40.071242 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-11 14:27:40.071252 | orchestrator | Wednesday 11 June 2025 14:27:07 +0000 (0:00:00.434) 0:00:03.881 ******** 2025-06-11 14:27:40.071288 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:27:40.071299 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:27:40.071310 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:27:40.071320 | orchestrator | 2025-06-11 14:27:40.071331 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-11 14:27:40.071342 | orchestrator | Wednesday 11 June 2025 14:27:07 +0000 (0:00:00.093) 0:00:03.974 ******** 2025-06-11 14:27:40.071353 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.071363 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.071374 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.071384 | orchestrator | 2025-06-11 14:27:40.071395 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-11 14:27:40.071405 | orchestrator | Wednesday 11 June 2025 14:27:08 +0000 (0:00:01.042) 0:00:05.017 ******** 2025-06-11 14:27:40.071416 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.071427 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.071437 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.071448 | orchestrator | 2025-06-11 14:27:40.071459 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-11 14:27:40.071470 | orchestrator | Wednesday 11 June 2025 14:27:09 +0000 (0:00:00.529) 0:00:05.546 ******** 2025-06-11 14:27:40.071481 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.071491 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.071502 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.071513 | orchestrator | 2025-06-11 14:27:40.071523 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-11 14:27:40.071534 | orchestrator | Wednesday 11 June 2025 14:27:10 +0000 (0:00:01.066) 0:00:06.612 ******** 2025-06-11 14:27:40.071544 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.071555 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.071565 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.071576 | orchestrator | 2025-06-11 14:27:40.071586 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-11 14:27:40.071597 | orchestrator | Wednesday 11 June 2025 14:27:24 +0000 (0:00:13.640) 0:00:20.253 ******** 2025-06-11 14:27:40.071607 | orchestrator | 
skipping: [testbed-node-3] 2025-06-11 14:27:40.071618 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:27:40.071629 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:27:40.071639 | orchestrator | 2025-06-11 14:27:40.071650 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-11 14:27:40.071681 | orchestrator | Wednesday 11 June 2025 14:27:24 +0000 (0:00:00.098) 0:00:20.351 ******** 2025-06-11 14:27:40.071698 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:40.071711 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:40.071729 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:40.071799 | orchestrator | 2025-06-11 14:27:40.071820 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-11 14:27:40.071838 | orchestrator | Wednesday 11 June 2025 14:27:31 +0000 (0:00:07.169) 0:00:27.521 ******** 2025-06-11 14:27:40.071856 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.071875 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.071893 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.071912 | orchestrator | 2025-06-11 14:27:40.071929 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-11 14:27:40.071947 | orchestrator | Wednesday 11 June 2025 14:27:31 +0000 (0:00:00.424) 0:00:27.945 ******** 2025-06-11 14:27:40.071964 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-11 14:27:40.071976 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-11 14:27:40.071986 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-11 14:27:40.071997 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-11 14:27:40.072008 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-11 14:27:40.072030 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-11 14:27:40.072041 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-11 14:27:40.072051 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-11 14:27:40.072062 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-11 14:27:40.072073 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-11 14:27:40.072083 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-11 14:27:40.072094 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-11 14:27:40.072104 | orchestrator | 2025-06-11 14:27:40.072115 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-11 14:27:40.072126 | orchestrator | Wednesday 11 June 2025 14:27:35 +0000 (0:00:03.392) 0:00:31.338 ******** 2025-06-11 14:27:40.072136 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.072147 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.072157 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.072168 | orchestrator | 2025-06-11 14:27:40.072179 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 14:27:40.072189 | orchestrator | 2025-06-11 14:27:40.072200 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 14:27:40.072210 | orchestrator | 
Wednesday 11 June 2025 14:27:36 +0000 (0:00:01.155) 0:00:32.493 ******** 2025-06-11 14:27:40.072221 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:27:40.072232 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:27:40.072242 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:27:40.072253 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:40.072263 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:40.072274 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:40.072284 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:40.072347 | orchestrator | 2025-06-11 14:27:40.072360 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:27:40.072372 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:27:40.072383 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:27:40.072396 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:27:40.072407 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:27:40.072418 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:27:40.072428 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:27:40.072439 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:27:40.072450 | orchestrator | 2025-06-11 14:27:40.072461 | orchestrator | 2025-06-11 14:27:40.072472 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:27:40.072483 | orchestrator | Wednesday 11 June 2025 14:27:40 +0000 (0:00:03.704) 0:00:36.197 ******** 2025-06-11 14:27:40.072494 | orchestrator | =============================================================================== 2025-06-11 14:27:40.072509 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.64s 2025-06-11 14:27:40.072528 | orchestrator | Install required packages (Debian) -------------------------------------- 7.17s 2025-06-11 14:27:40.072546 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.70s 2025-06-11 14:27:40.072577 | orchestrator | Copy fact files --------------------------------------------------------- 3.39s 2025-06-11 14:27:40.072597 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s 2025-06-11 14:27:40.072616 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s 2025-06-11 14:27:40.072649 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.16s 2025-06-11 14:27:40.279368 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s 2025-06-11 14:27:40.279476 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s 2025-06-11 14:27:40.279490 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.53s 2025-06-11 14:27:40.279502 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2025-06-11 14:27:40.279513 | orchestrator | Create custom facts directory 
------------------------------------------- 0.42s 2025-06-11 14:27:40.279524 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-06-11 14:27:40.279535 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-06-11 14:27:40.279546 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-06-11 14:27:40.279558 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-06-11 14:27:40.279569 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-06-11 14:27:40.279580 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s 2025-06-11 14:27:40.505701 | orchestrator | + osism apply bootstrap 2025-06-11 14:27:42.150562 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:27:42.150663 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:27:42.150678 | orchestrator | Registering Redlock._release_script 2025-06-11 14:27:42.209733 | orchestrator | 2025-06-11 14:27:42 | INFO  | Task b91a266a-f3ff-4f34-a9e4-0a9676a0e38d (bootstrap) was prepared for execution. 2025-06-11 14:27:42.209868 | orchestrator | 2025-06-11 14:27:42 | INFO  | It takes a moment until task b91a266a-f3ff-4f34-a9e4-0a9676a0e38d (bootstrap) has been started and output is visible here. 2025-06-11 14:27:57.673897 | orchestrator | 2025-06-11 14:27:57.674004 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-11 14:27:57.674079 | orchestrator | 2025-06-11 14:27:57.674092 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-11 14:27:57.674104 | orchestrator | Wednesday 11 June 2025 14:27:46 +0000 (0:00:00.162) 0:00:00.162 ******** 2025-06-11 14:27:57.674115 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:57.674127 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:57.674138 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:57.674149 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:57.674160 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:27:57.674171 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:27:57.674182 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:27:57.674193 | orchestrator | 2025-06-11 14:27:57.674204 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 14:27:57.674215 | orchestrator | 2025-06-11 14:27:57.674226 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 14:27:57.674237 | orchestrator | Wednesday 11 June 2025 14:27:46 +0000 (0:00:00.248) 0:00:00.410 ******** 2025-06-11 14:27:57.674248 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:27:57.674259 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:27:57.674270 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:27:57.674281 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:57.674291 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:57.674302 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:57.674313 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:57.674324 | orchestrator | 2025-06-11 14:27:57.674335 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-11 14:27:57.674346 | orchestrator | 2025-06-11 14:27:57.674382 | orchestrator | TASK [Gathers 
facts about hosts] *********************************************** 2025-06-11 14:27:57.674395 | orchestrator | Wednesday 11 June 2025 14:27:50 +0000 (0:00:03.627) 0:00:04.037 ******** 2025-06-11 14:27:57.674408 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-11 14:27:57.674420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-11 14:27:57.674432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-11 14:27:57.674444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:27:57.674457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-11 14:27:57.674469 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-11 14:27:57.674480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:27:57.674492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-11 14:27:57.674522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-11 14:27:57.674534 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-11 14:27:57.674547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-11 14:27:57.674558 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-11 14:27:57.674571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:27:57.674583 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-11 14:27:57.674595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-11 14:27:57.674607 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-11 14:27:57.674619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-11 14:27:57.674630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-11 14:27:57.674642 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-11 14:27:57.674655 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:27:57.674667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-11 14:27:57.674678 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-11 14:27:57.674691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-11 14:27:57.674703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-11 14:27:57.674721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-11 14:27:57.674762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-11 14:27:57.674780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-11 14:27:57.674796 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:27:57.674811 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-11 14:27:57.674828 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-11 14:27:57.674845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-11 14:27:57.674864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-11 14:27:57.674882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-11 14:27:57.674895 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-11 14:27:57.674905 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:27:57.674934 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-11 14:27:57.674953 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:27:57.674970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-11 14:27:57.674986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-11 14:27:57.675003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-11 14:27:57.675020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-11 14:27:57.675037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-11 14:27:57.675052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-11 14:27:57.675079 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-11 14:27:57.675097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-11 14:27:57.675115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-11 14:27:57.675156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-11 14:27:57.675174 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:27:57.675191 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-11 14:27:57.675210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-11 14:27:57.675227 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:27:57.675247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-11 14:27:57.675265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-11 14:27:57.675283 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-11 14:27:57.675297 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-11 14:27:57.675308 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:27:57.675318 | orchestrator | 2025-06-11 14:27:57.675329 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-11 14:27:57.675340 | orchestrator | 2025-06-11 14:27:57.675350 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-11 14:27:57.675361 | orchestrator | Wednesday 11 June 2025 14:27:50 +0000 (0:00:00.481) 0:00:04.519 ******** 2025-06-11 14:27:57.675372 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:57.675383 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:27:57.675393 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:57.675404 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:27:57.675414 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:57.675424 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:57.675435 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:27:57.675445 | orchestrator | 2025-06-11 14:27:57.675456 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-11 14:27:57.675467 | orchestrator | Wednesday 11 June 2025 14:27:51 +0000 (0:00:01.349) 0:00:05.869 ******** 2025-06-11 14:27:57.675477 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:57.675488 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:27:57.675499 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:27:57.675509 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:27:57.675520 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:27:57.675530 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:27:57.675540 | orchestrator | ok: 
[testbed-node-0] 2025-06-11 14:27:57.675551 | orchestrator | 2025-06-11 14:27:57.675561 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-11 14:27:57.675572 | orchestrator | Wednesday 11 June 2025 14:27:53 +0000 (0:00:01.160) 0:00:07.029 ******** 2025-06-11 14:27:57.675584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:27:57.675597 | orchestrator | 2025-06-11 14:27:57.675608 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-11 14:27:57.675619 | orchestrator | Wednesday 11 June 2025 14:27:53 +0000 (0:00:00.244) 0:00:07.274 ******** 2025-06-11 14:27:57.675630 | orchestrator | changed: [testbed-manager] 2025-06-11 14:27:57.675641 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:57.675651 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:57.675662 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:27:57.675672 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:27:57.675683 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:27:57.675694 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:57.675704 | orchestrator | 2025-06-11 14:27:57.675715 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-11 14:27:57.675726 | orchestrator | Wednesday 11 June 2025 14:27:55 +0000 (0:00:02.037) 0:00:09.311 ******** 2025-06-11 14:27:57.675774 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:27:57.675787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:27:57.675800 | orchestrator | 2025-06-11 14:27:57.675811 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-11 14:27:57.675828 | orchestrator | Wednesday 11 June 2025 14:27:55 +0000 (0:00:00.231) 0:00:09.542 ******** 2025-06-11 14:27:57.675839 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:57.675850 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:57.675860 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:57.675871 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:27:57.675881 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:27:57.675892 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:27:57.675902 | orchestrator | 2025-06-11 14:27:57.675913 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-11 14:27:57.675923 | orchestrator | Wednesday 11 June 2025 14:27:56 +0000 (0:00:00.964) 0:00:10.506 ******** 2025-06-11 14:27:57.675934 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:27:57.675945 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:27:57.675955 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:27:57.675966 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:27:57.675976 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:27:57.675987 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:27:57.675997 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:27:57.676007 | orchestrator | 2025-06-11 14:27:57.676018 | orchestrator | TASK 
[osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-11 14:27:57.676029 | orchestrator | Wednesday 11 June 2025 14:27:57 +0000 (0:00:00.574) 0:00:11.081 ******** 2025-06-11 14:27:57.676039 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:27:57.676050 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:27:57.676061 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:27:57.676071 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:27:57.676082 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:27:57.676092 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:27:57.676103 | orchestrator | ok: [testbed-manager] 2025-06-11 14:27:57.676113 | orchestrator | 2025-06-11 14:27:57.676124 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-11 14:27:57.676136 | orchestrator | Wednesday 11 June 2025 14:27:57 +0000 (0:00:00.413) 0:00:11.494 ******** 2025-06-11 14:27:57.676147 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:27:57.676157 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:27:57.676176 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:28:09.194868 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:28:09.194987 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:28:09.195003 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:28:09.195016 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:28:09.195028 | orchestrator | 2025-06-11 14:28:09.195041 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-11 14:28:09.195054 | orchestrator | Wednesday 11 June 2025 14:27:57 +0000 (0:00:00.225) 0:00:11.720 ******** 2025-06-11 14:28:09.195068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:09.195097 | orchestrator | 2025-06-11 14:28:09.195109 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-11 14:28:09.195121 | orchestrator | Wednesday 11 June 2025 14:27:58 +0000 (0:00:00.274) 0:00:11.994 ******** 2025-06-11 14:28:09.195132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:09.195167 | orchestrator | 2025-06-11 14:28:09.195180 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-11 14:28:09.195191 | orchestrator | Wednesday 11 June 2025 14:27:58 +0000 (0:00:00.322) 0:00:12.317 ******** 2025-06-11 14:28:09.195202 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.195214 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.195225 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.195236 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.195246 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.195257 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.195268 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.195279 | orchestrator | 2025-06-11 14:28:09.195290 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-11 
14:28:09.195301 | orchestrator | Wednesday 11 June 2025 14:27:59 +0000 (0:00:01.152) 0:00:13.470 ******** 2025-06-11 14:28:09.195312 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:28:09.195323 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:28:09.195334 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:28:09.195344 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:28:09.195355 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:28:09.195366 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:28:09.195377 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:28:09.195388 | orchestrator | 2025-06-11 14:28:09.195398 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-11 14:28:09.195409 | orchestrator | Wednesday 11 June 2025 14:27:59 +0000 (0:00:00.213) 0:00:13.683 ******** 2025-06-11 14:28:09.195420 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.195431 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.195442 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.195453 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.195464 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.195475 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.195485 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.195496 | orchestrator | 2025-06-11 14:28:09.195507 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-11 14:28:09.195518 | orchestrator | Wednesday 11 June 2025 14:28:00 +0000 (0:00:00.537) 0:00:14.220 ******** 2025-06-11 14:28:09.195529 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:28:09.195540 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:28:09.195551 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:28:09.195562 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:28:09.195573 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:28:09.195583 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:28:09.195594 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:28:09.195605 | orchestrator | 2025-06-11 14:28:09.195616 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-11 14:28:09.195628 | orchestrator | Wednesday 11 June 2025 14:28:00 +0000 (0:00:00.224) 0:00:14.445 ******** 2025-06-11 14:28:09.195639 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.195650 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:09.195660 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:09.195671 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:09.195682 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:09.195693 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:09.195703 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:09.195714 | orchestrator | 2025-06-11 14:28:09.195746 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-11 14:28:09.195757 | orchestrator | Wednesday 11 June 2025 14:28:01 +0000 (0:00:00.532) 0:00:14.977 ******** 2025-06-11 14:28:09.195768 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.195779 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:09.195790 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:09.195808 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:09.195819 | orchestrator | changed: [testbed-node-1] 
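The resolvconf tasks above retire any package-managed /etc/resolv.conf and point it at the systemd-resolved stub resolver instead, which is why the fresh nodes report "changed" for the link task. A minimal sketch of that step (the module choice is an assumption; the osism.commons.resolvconf role may implement it differently):

    # Sketch: replace /etc/resolv.conf with a symlink to the
    # systemd-resolved stub, as the task above reports doing.
    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        force: true

Hosts already configured this way (testbed-manager here) come back "ok" rather than "changed".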
2025-06-11 14:28:09.195830 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:09.195840 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:09.195851 | orchestrator | 2025-06-11 14:28:09.195862 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-11 14:28:09.195873 | orchestrator | Wednesday 11 June 2025 14:28:02 +0000 (0:00:01.078) 0:00:16.056 ******** 2025-06-11 14:28:09.195885 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.195896 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.195907 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.195959 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.195971 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.195982 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.195993 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.196004 | orchestrator | 2025-06-11 14:28:09.196014 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-11 14:28:09.196026 | orchestrator | Wednesday 11 June 2025 14:28:03 +0000 (0:00:01.085) 0:00:17.141 ******** 2025-06-11 14:28:09.196055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:09.196067 | orchestrator | 2025-06-11 14:28:09.196078 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-11 14:28:09.196089 | orchestrator | Wednesday 11 June 2025 14:28:03 +0000 (0:00:00.415) 0:00:17.557 ******** 2025-06-11 14:28:09.196100 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:28:09.196111 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:09.196122 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:09.196133 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:09.196143 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:09.196154 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:09.196165 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:09.196175 | orchestrator | 2025-06-11 14:28:09.196186 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-11 14:28:09.196197 | orchestrator | Wednesday 11 June 2025 14:28:04 +0000 (0:00:01.218) 0:00:18.775 ******** 2025-06-11 14:28:09.196208 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196219 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196230 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.196241 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.196252 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.196263 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.196274 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.196284 | orchestrator | 2025-06-11 14:28:09.196295 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-11 14:28:09.196306 | orchestrator | Wednesday 11 June 2025 14:28:05 +0000 (0:00:00.223) 0:00:18.999 ******** 2025-06-11 14:28:09.196317 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196328 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196339 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.196350 | orchestrator | ok: [testbed-node-5] 2025-06-11 
14:28:09.196360 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.196371 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.196382 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.196393 | orchestrator | 2025-06-11 14:28:09.196403 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-11 14:28:09.196414 | orchestrator | Wednesday 11 June 2025 14:28:05 +0000 (0:00:00.244) 0:00:19.243 ******** 2025-06-11 14:28:09.196425 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196436 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196447 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.196457 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.196468 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.196485 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.196496 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.196507 | orchestrator | 2025-06-11 14:28:09.196518 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-11 14:28:09.196529 | orchestrator | Wednesday 11 June 2025 14:28:05 +0000 (0:00:00.212) 0:00:19.456 ******** 2025-06-11 14:28:09.196541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:09.196554 | orchestrator | 2025-06-11 14:28:09.196565 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-11 14:28:09.196576 | orchestrator | Wednesday 11 June 2025 14:28:05 +0000 (0:00:00.265) 0:00:19.722 ******** 2025-06-11 14:28:09.196587 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196598 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.196609 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196619 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.196630 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.196641 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.196651 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.196662 | orchestrator | 2025-06-11 14:28:09.196673 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-11 14:28:09.196684 | orchestrator | Wednesday 11 June 2025 14:28:06 +0000 (0:00:00.517) 0:00:20.239 ******** 2025-06-11 14:28:09.196700 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:28:09.196711 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:28:09.196751 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:28:09.196763 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:28:09.196774 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:28:09.196785 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:28:09.196795 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:28:09.196806 | orchestrator | 2025-06-11 14:28:09.196817 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-11 14:28:09.196828 | orchestrator | Wednesday 11 June 2025 14:28:06 +0000 (0:00:00.208) 0:00:20.447 ******** 2025-06-11 14:28:09.196839 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196850 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196861 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.196871 | orchestrator | ok: 
[testbed-node-4] 2025-06-11 14:28:09.196882 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:09.196893 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:09.196904 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:09.196915 | orchestrator | 2025-06-11 14:28:09.196926 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-11 14:28:09.196937 | orchestrator | Wednesday 11 June 2025 14:28:07 +0000 (0:00:01.010) 0:00:21.457 ******** 2025-06-11 14:28:09.196948 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.196959 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.196970 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.196981 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.196991 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:09.197002 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:09.197013 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:09.197023 | orchestrator | 2025-06-11 14:28:09.197034 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-11 14:28:09.197046 | orchestrator | Wednesday 11 June 2025 14:28:08 +0000 (0:00:00.534) 0:00:21.992 ******** 2025-06-11 14:28:09.197057 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:09.197067 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:09.197078 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:09.197088 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:09.197107 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.560903 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.560992 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.561019 | orchestrator | 2025-06-11 14:28:45.561027 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-11 14:28:45.561035 | orchestrator | Wednesday 11 June 2025 14:28:09 +0000 (0:00:01.152) 0:00:23.145 ******** 2025-06-11 14:28:45.561041 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561049 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561055 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561061 | orchestrator | changed: [testbed-manager] 2025-06-11 14:28:45.561067 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.561074 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.561080 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.561086 | orchestrator | 2025-06-11 14:28:45.561093 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-11 14:28:45.561099 | orchestrator | Wednesday 11 June 2025 14:28:22 +0000 (0:00:13.765) 0:00:36.910 ******** 2025-06-11 14:28:45.561106 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561112 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561118 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561139 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561146 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561197 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561204 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561210 | orchestrator | 2025-06-11 14:28:45.561215 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-11 14:28:45.561221 | orchestrator | Wednesday 11 June 2025 14:28:23 +0000 (0:00:00.213) 0:00:37.123 ******** 2025-06-11 14:28:45.561227 | 
orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561233 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561265 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561271 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561277 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561283 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561289 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561294 | orchestrator | 2025-06-11 14:28:45.561300 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-11 14:28:45.561306 | orchestrator | Wednesday 11 June 2025 14:28:23 +0000 (0:00:00.233) 0:00:37.357 ******** 2025-06-11 14:28:45.561312 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561318 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561323 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561329 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561335 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561340 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561346 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561352 | orchestrator | 2025-06-11 14:28:45.561358 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-11 14:28:45.561364 | orchestrator | Wednesday 11 June 2025 14:28:23 +0000 (0:00:00.216) 0:00:37.574 ******** 2025-06-11 14:28:45.561372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:45.561379 | orchestrator | 2025-06-11 14:28:45.561385 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-11 14:28:45.561391 | orchestrator | Wednesday 11 June 2025 14:28:23 +0000 (0:00:00.289) 0:00:37.863 ******** 2025-06-11 14:28:45.561397 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561403 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561408 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561414 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561419 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561425 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561431 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561436 | orchestrator | 2025-06-11 14:28:45.561442 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-11 14:28:45.561455 | orchestrator | Wednesday 11 June 2025 14:28:25 +0000 (0:00:01.544) 0:00:39.407 ******** 2025-06-11 14:28:45.561461 | orchestrator | changed: [testbed-manager] 2025-06-11 14:28:45.561468 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:45.561486 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:45.561493 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.561499 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:45.561506 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.561513 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.561519 | orchestrator | 2025-06-11 14:28:45.561526 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-11 14:28:45.561532 | orchestrator | Wednesday 11 June 2025 14:28:26 +0000 (0:00:00.999) 0:00:40.406 ******** 2025-06-11 
14:28:45.561539 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561545 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561551 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561558 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561564 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561570 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561577 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561583 | orchestrator | 2025-06-11 14:28:45.561589 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-11 14:28:45.561596 | orchestrator | Wednesday 11 June 2025 14:28:27 +0000 (0:00:00.808) 0:00:41.215 ******** 2025-06-11 14:28:45.561603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:45.561611 | orchestrator | 2025-06-11 14:28:45.561618 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-11 14:28:45.561625 | orchestrator | Wednesday 11 June 2025 14:28:27 +0000 (0:00:00.288) 0:00:41.504 ******** 2025-06-11 14:28:45.561631 | orchestrator | changed: [testbed-manager] 2025-06-11 14:28:45.561638 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:45.561644 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:45.561651 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.561657 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:45.561663 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.561670 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.561676 | orchestrator | 2025-06-11 14:28:45.561697 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-11 14:28:45.561719 | orchestrator | Wednesday 11 June 2025 14:28:28 +0000 (0:00:01.052) 0:00:42.556 ******** 2025-06-11 14:28:45.561725 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:28:45.561731 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:28:45.561738 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:28:45.561744 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:28:45.561750 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:28:45.561756 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:28:45.561762 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:28:45.561768 | orchestrator | 2025-06-11 14:28:45.561775 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-11 14:28:45.561781 | orchestrator | Wednesday 11 June 2025 14:28:28 +0000 (0:00:00.291) 0:00:42.848 ******** 2025-06-11 14:28:45.561788 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:45.561794 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:45.561801 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.561807 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.561813 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:45.561819 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.561825 | orchestrator | changed: [testbed-manager] 2025-06-11 14:28:45.561830 | orchestrator | 2025-06-11 14:28:45.561836 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-11 14:28:45.561842 | orchestrator | 
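[Editor's example] The rsyslog role above replaces rsyslog.conf and then adds a rule forwarding all syslog traffic to a local fluentd daemon. The target port and protocol are not visible in the log; 5140/udp is fluentd's customary in_syslog port, so the drop-in below is a sketch under that assumption:

  - name: Forward syslog messages to the local fluentd daemon
    ansible.builtin.copy:
      dest: /etc/rsyslog.d/60-fluentd.conf
      content: |
        # 5140/udp is fluentd's usual in_syslog port; adjust to the real input
        *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

  - name: Restart rsyslog to pick up the forward rule
    ansible.builtin.service:
      name: rsyslog
      state: restarted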
Wednesday 11 June 2025 14:28:40 +0000 (0:00:11.234) 0:00:54.082 ******** 2025-06-11 14:28:45.561852 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561858 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561864 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561870 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561875 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561881 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561887 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561892 | orchestrator | 2025-06-11 14:28:45.561898 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-11 14:28:45.561904 | orchestrator | Wednesday 11 June 2025 14:28:41 +0000 (0:00:01.533) 0:00:55.616 ******** 2025-06-11 14:28:45.561910 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561915 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561921 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561927 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561933 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.561938 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561944 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.561949 | orchestrator | 2025-06-11 14:28:45.561955 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-11 14:28:45.561961 | orchestrator | Wednesday 11 June 2025 14:28:42 +0000 (0:00:00.852) 0:00:56.469 ******** 2025-06-11 14:28:45.561967 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.561972 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.561978 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.561984 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.561989 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.561995 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.562001 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.562007 | orchestrator | 2025-06-11 14:28:45.562012 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-11 14:28:45.562082 | orchestrator | Wednesday 11 June 2025 14:28:42 +0000 (0:00:00.222) 0:00:56.691 ******** 2025-06-11 14:28:45.562088 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.562094 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.562100 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.562105 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.562111 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.562117 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.562122 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.562128 | orchestrator | 2025-06-11 14:28:45.562134 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-11 14:28:45.562140 | orchestrator | Wednesday 11 June 2025 14:28:42 +0000 (0:00:00.223) 0:00:56.914 ******** 2025-06-11 14:28:45.562146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:28:45.562152 | orchestrator | 2025-06-11 14:28:45.562158 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-11 14:28:45.562170 | orchestrator | 
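[Editor's example] The systohc and configfs steps above are small but Ubuntu-24.04-specific: hwclock moved into the util-linux-extra package (hence the install right before the sync), and configfs is mounted through its systemd mount unit rather than an fstab entry. A minimal equivalent:

  - name: Install util-linux-extra (provides hwclock on Ubuntu 24.04)
    ansible.builtin.apt:
      name: util-linux-extra
      state: present

  - name: Write the current system time to the hardware clock
    ansible.builtin.command: hwclock --systohc
    changed_when: false

  - name: Ensure configfs is mounted via its systemd unit
    ansible.builtin.systemd:
      name: sys-kernel-config.mount
      state: started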
Wednesday 11 June 2025 14:28:43 +0000 (0:00:00.248) 0:00:57.163 ******** 2025-06-11 14:28:45.562176 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.562182 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.562188 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.562193 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.562199 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.562205 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.562210 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.562216 | orchestrator | 2025-06-11 14:28:45.562222 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-11 14:28:45.562228 | orchestrator | Wednesday 11 June 2025 14:28:44 +0000 (0:00:01.569) 0:00:58.733 ******** 2025-06-11 14:28:45.562233 | orchestrator | changed: [testbed-manager] 2025-06-11 14:28:45.562239 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:28:45.562249 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:28:45.562255 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:28:45.562261 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:28:45.562267 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:28:45.562273 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:28:45.562278 | orchestrator | 2025-06-11 14:28:45.562284 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-11 14:28:45.562290 | orchestrator | Wednesday 11 June 2025 14:28:45 +0000 (0:00:00.557) 0:00:59.290 ******** 2025-06-11 14:28:45.562296 | orchestrator | ok: [testbed-manager] 2025-06-11 14:28:45.562302 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:28:45.562307 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:28:45.562313 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:28:45.562319 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:28:45.562324 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:28:45.562330 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:28:45.562336 | orchestrator | 2025-06-11 14:28:45.562347 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-11 14:30:59.690316 | orchestrator | Wednesday 11 June 2025 14:28:45 +0000 (0:00:00.216) 0:00:59.507 ******** 2025-06-11 14:30:59.690415 | orchestrator | ok: [testbed-manager] 2025-06-11 14:30:59.690427 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:30:59.690436 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:30:59.690443 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:30:59.690450 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:30:59.690458 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:30:59.690465 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:30:59.690471 | orchestrator | 2025-06-11 14:30:59.690479 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-11 14:30:59.690485 | orchestrator | Wednesday 11 June 2025 14:28:46 +0000 (0:00:01.116) 0:01:00.623 ******** 2025-06-11 14:30:59.690491 | orchestrator | changed: [testbed-manager] 2025-06-11 14:30:59.690499 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:30:59.690506 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:30:59.690513 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:30:59.690520 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:30:59.690528 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:30:59.690534 | orchestrator | 
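[Editor's example] "Set needrestart mode" reports changed on every host; the log does not show which mode is set, but unattended runs typically force automatic service restarts so apt never blocks on an interactive needrestart prompt. A sketch under that assumption (the conf.d file name is also assumed):

  - name: Switch needrestart to automatic mode
    ansible.builtin.copy:
      dest: /etc/needrestart/conf.d/osism.conf   # assumed file name
      content: |
        # 'a' restarts services automatically instead of prompting,
        # keeping unattended apt runs non-interactive
        $nrconf{restart} = 'a';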
changed: [testbed-node-5] 2025-06-11 14:30:59.690541 | orchestrator | 2025-06-11 14:30:59.690547 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-11 14:30:59.690555 | orchestrator | Wednesday 11 June 2025 14:28:48 +0000 (0:00:01.577) 0:01:02.201 ******** 2025-06-11 14:30:59.690562 | orchestrator | ok: [testbed-manager] 2025-06-11 14:30:59.690568 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:30:59.690575 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:30:59.690582 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:30:59.690590 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:30:59.690609 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:30:59.690617 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:30:59.690624 | orchestrator | 2025-06-11 14:30:59.690632 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-11 14:30:59.690689 | orchestrator | Wednesday 11 June 2025 14:28:50 +0000 (0:00:02.257) 0:01:04.458 ******** 2025-06-11 14:30:59.690698 | orchestrator | ok: [testbed-manager] 2025-06-11 14:30:59.690706 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:30:59.690714 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:30:59.690722 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:30:59.690730 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:30:59.690737 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:30:59.690745 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:30:59.690753 | orchestrator | 2025-06-11 14:30:59.690761 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-11 14:30:59.690769 | orchestrator | Wednesday 11 June 2025 14:29:31 +0000 (0:00:40.564) 0:01:45.023 ******** 2025-06-11 14:30:59.690777 | orchestrator | changed: [testbed-manager] 2025-06-11 14:30:59.690807 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:30:59.690816 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:30:59.690824 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:30:59.690832 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:30:59.690840 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:30:59.690848 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:30:59.690856 | orchestrator | 2025-06-11 14:30:59.690864 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-11 14:30:59.690873 | orchestrator | Wednesday 11 June 2025 14:30:45 +0000 (0:01:14.274) 0:02:59.297 ******** 2025-06-11 14:30:59.690881 | orchestrator | ok: [testbed-manager] 2025-06-11 14:30:59.690888 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:30:59.690897 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:30:59.690904 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:30:59.690912 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:30:59.690920 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:30:59.690928 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:30:59.690936 | orchestrator | 2025-06-11 14:30:59.690945 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-11 14:30:59.690955 | orchestrator | Wednesday 11 June 2025 14:30:46 +0000 (0:00:01.489) 0:03:00.786 ******** 2025-06-11 14:30:59.690963 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:30:59.690970 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:30:59.690978 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:30:59.690985 | orchestrator 
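[Editor's example] The package lifecycle above (cache update, download, dist upgrade, install of the required set, cache/dependency cleanup) maps onto the apt module roughly as follows; the cache validity value and the exact variable carrying the package list are assumptions, though the log does show required_packages_distribution being set earlier:

  - name: Update the package cache
    ansible.builtin.apt:
      update_cache: true
      cache_valid_time: 3600        # assumed apt_cache_valid_time default

  - name: Upgrade all installed packages
    ansible.builtin.apt:
      upgrade: dist

  - name: Install the required packages
    ansible.builtin.apt:
      name: "{{ required_packages_distribution }}"
      state: present

  - name: Remove useless cached packages and unneeded dependencies
    ansible.builtin.apt:
      autoclean: true
      autoremove: true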
| ok: [testbed-node-5] 2025-06-11 14:30:59.690993 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:30:59.691001 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:30:59.691024 | orchestrator | changed: [testbed-manager] 2025-06-11 14:30:59.691031 | orchestrator | 2025-06-11 14:30:59.691038 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-11 14:30:59.691045 | orchestrator | Wednesday 11 June 2025 14:30:58 +0000 (0:00:11.680) 0:03:12.467 ******** 2025-06-11 14:30:59.691060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-11 14:30:59.691073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-11 14:30:59.691103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-11 14:30:59.691116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-11 14:30:59.691125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-11 14:30:59.691141 | orchestrator | 2025-06-11 14:30:59.691149 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-11 14:30:59.691157 | orchestrator | Wednesday 11 June 2025 14:30:58 +0000 (0:00:00.386) 0:03:12.853 ******** 2025-06-11 14:30:59.691165 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-11 14:30:59.691173 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:30:59.691181 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-11 14:30:59.691189 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:30:59.691197 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
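[Editor's example] The five include_tasks calls above pass one group of settings at a time as key/value items. Reconstructed directly from the items printed in the log, the underlying data is (only the top-level variable name is an assumption):

  sysctl_settings:
    elasticsearch:
      - { name: vm.max_map_count, value: 262144 }
    rabbitmq:
      - { name: net.ipv4.tcp_keepalive_time, value: 6 }
      - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
      - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
      - { name: net.core.wmem_max, value: 16777216 }
      - { name: net.core.rmem_max, value: 16777216 }
      - { name: net.ipv4.tcp_fin_timeout, value: 20 }
      - { name: net.ipv4.tcp_tw_reuse, value: 1 }
      - { name: net.core.somaxconn, value: 4096 }
      - { name: net.ipv4.tcp_syncookies, value: 0 }
      - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
    generic:
      - { name: vm.swappiness, value: 1 }
    compute:
      - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
    k3s_node:
      - { name: fs.inotify.max_user_instances, value: 1024 }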
'vm.max_map_count', 'value': 262144})  2025-06-11 14:30:59.691204 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-11 14:30:59.691212 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:30:59.691220 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:30:59.691228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-11 14:30:59.691236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-11 14:30:59.691244 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-11 14:30:59.691251 | orchestrator | 2025-06-11 14:30:59.691259 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-11 14:30:59.691267 | orchestrator | Wednesday 11 June 2025 14:30:59 +0000 (0:00:00.673) 0:03:13.526 ******** 2025-06-11 14:30:59.691275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-11 14:30:59.691284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-11 14:30:59.691292 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-11 14:30:59.691300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-11 14:30:59.691308 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-11 14:30:59.691316 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-11 14:30:59.691327 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-11 14:30:59.691335 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-11 14:30:59.691343 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-11 14:30:59.691351 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-11 14:30:59.691358 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-11 14:30:59.691366 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-11 14:30:59.691373 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-11 14:30:59.691439 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-11 14:30:59.691447 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-11 14:30:59.691455 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-11 14:30:59.691463 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-11 14:30:59.691470 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-11 14:30:59.691484 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-11 14:30:59.691492 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-11 14:30:59.691506 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-11 14:31:05.160026 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-11 14:31:05.160138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-11 14:31:05.160152 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-11 14:31:05.160166 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:05.160179 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-11 14:31:05.160190 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-11 14:31:05.160201 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-11 14:31:05.160212 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-11 14:31:05.160223 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-11 14:31:05.160234 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-11 14:31:05.160245 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-11 14:31:05.160255 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-11 14:31:05.160266 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-11 14:31:05.160277 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-11 14:31:05.160288 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-11 14:31:05.160299 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-11 14:31:05.160309 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-11 14:31:05.160321 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:31:05.160333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-11 14:31:05.160344 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-11 14:31:05.160354 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-11 14:31:05.160365 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:31:05.160376 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:31:05.160387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-11 14:31:05.160398 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-11 14:31:05.160409 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-11 14:31:05.160419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-11 
14:31:05.160430 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-11 14:31:05.160441 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-11 14:31:05.160452 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-11 14:31:05.160463 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-11 14:31:05.160502 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-11 14:31:05.160513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160561 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-11 14:31:05.160603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-11 14:31:05.160616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-11 14:31:05.160628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-11 14:31:05.160668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-11 14:31:05.160680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-11 14:31:05.160693 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-11 14:31:05.160723 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-11 14:31:05.160737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-11 14:31:05.160749 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-11 14:31:05.160761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-11 14:31:05.160773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-11 14:31:05.160785 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-11 14:31:05.160797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-11 14:31:05.160809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-11 14:31:05.160822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-11 14:31:05.160834 | orchestrator | 2025-06-11 14:31:05.160847 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-11 
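[Editor's example] The skip pattern above is the point of the grouping: the manager and testbed-node-3/4/5 skip every rabbitmq item while testbed-node-0/1/2 apply them, because each included group only fires on hosts in the matching inventory group. A sketch of the apply step for one group, assuming ansible.posix.sysctl and the data reconstructed earlier:

  - name: Set the rabbitmq sysctl parameters on rabbitmq hosts only
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      sysctl_set: true
      state: present
    loop: "{{ sysctl_settings.rabbitmq }}"
    when: "'rabbitmq' in group_names"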
14:31:05.160859 | orchestrator | Wednesday 11 June 2025 14:31:03 +0000 (0:00:03.742) 0:03:17.268 ******** 2025-06-11 14:31:05.160872 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160884 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160895 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160906 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-11 14:31:05.160948 | orchestrator | 2025-06-11 14:31:05.160959 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-11 14:31:05.160969 | orchestrator | Wednesday 11 June 2025 14:31:03 +0000 (0:00:00.551) 0:03:17.820 ******** 2025-06-11 14:31:05.160989 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-11 14:31:05.161000 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:05.161011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-11 14:31:05.161022 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-11 14:31:05.161032 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:31:05.161043 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:31:05.161054 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-11 14:31:05.161064 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:31:05.161075 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-11 14:31:05.161085 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-11 14:31:05.161102 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-11 14:31:05.161113 | orchestrator | 2025-06-11 14:31:05.161123 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-11 14:31:05.161134 | orchestrator | Wednesday 11 June 2025 14:31:04 +0000 (0:00:00.486) 0:03:18.306 ******** 2025-06-11 14:31:05.161144 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-11 14:31:05.161155 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:05.161166 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-11 14:31:05.161177 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-11 14:31:05.161187 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:31:05.161198 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:31:05.161208 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024})  2025-06-11 14:31:05.161219 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:31:05.161229 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-11 14:31:05.161240 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-11 14:31:05.161250 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-11 14:31:05.161261 | orchestrator | 2025-06-11 14:31:05.161271 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-11 14:31:05.161282 | orchestrator | Wednesday 11 June 2025 14:31:04 +0000 (0:00:00.544) 0:03:18.851 ******** 2025-06-11 14:31:05.161292 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:05.161303 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:31:05.161314 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:31:05.161324 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:31:05.161340 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:31:16.487286 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:31:16.487396 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:31:16.487412 | orchestrator | 2025-06-11 14:31:16.487425 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-11 14:31:16.487438 | orchestrator | Wednesday 11 June 2025 14:31:05 +0000 (0:00:00.263) 0:03:19.114 ******** 2025-06-11 14:31:16.487449 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:16.487461 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:16.487472 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:16.487482 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:16.487493 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:16.487503 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:16.487514 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:16.487548 | orchestrator | 2025-06-11 14:31:16.487560 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-11 14:31:16.487572 | orchestrator | Wednesday 11 June 2025 14:31:10 +0000 (0:00:05.522) 0:03:24.637 ******** 2025-06-11 14:31:16.487591 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-11 14:31:16.487610 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-11 14:31:16.487673 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:16.487695 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-11 14:31:16.487713 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:31:16.487731 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-11 14:31:16.487750 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:31:16.487773 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-11 14:31:16.487794 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:31:16.487814 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-11 14:31:16.487831 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:31:16.487844 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:31:16.487856 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-11 14:31:16.487867 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:31:16.487879 | orchestrator | 2025-06-11 14:31:16.487892 | orchestrator | TASK [osism.commons.services : Start/enable required 
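[Editor's example] The services role first gathers service facts (the slowest step here at about 5.5 s), then checks for nscd, which every host skips, and finally enables the required services. The real action behind "Check services" is not visible in the log, so the middle task below is only illustrative:

  - name: Populate service facts
    ansible.builtin.service_facts:

  - name: Report a conflicting service if present   # illustrative check
    ansible.builtin.debug:
      msg: "{{ item }} is installed and may conflict with the deployment"
    loop: [nscd]
    when: "(item ~ '.service') in ansible_facts.services"

  - name: Start and enable the required services
    ansible.builtin.service:
      name: "{{ item }}"
      state: started
      enabled: true
    loop: [cron]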
services] ***************** 2025-06-11 14:31:16.487904 | orchestrator | Wednesday 11 June 2025 14:31:10 +0000 (0:00:00.314) 0:03:24.951 ******** 2025-06-11 14:31:16.487917 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-11 14:31:16.487929 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-11 14:31:16.487941 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-11 14:31:16.487953 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-11 14:31:16.487965 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-11 14:31:16.487977 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-11 14:31:16.487988 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-11 14:31:16.488000 | orchestrator | 2025-06-11 14:31:16.488013 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-11 14:31:16.488025 | orchestrator | Wednesday 11 June 2025 14:31:11 +0000 (0:00:01.004) 0:03:25.956 ******** 2025-06-11 14:31:16.488040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:31:16.488054 | orchestrator | 2025-06-11 14:31:16.488067 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-11 14:31:16.488079 | orchestrator | Wednesday 11 June 2025 14:31:12 +0000 (0:00:00.419) 0:03:26.375 ******** 2025-06-11 14:31:16.488091 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:16.488104 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:16.488116 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:16.488128 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:16.488140 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:16.488152 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:16.488164 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:16.488177 | orchestrator | 2025-06-11 14:31:16.488187 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-11 14:31:16.488213 | orchestrator | Wednesday 11 June 2025 14:31:13 +0000 (0:00:01.294) 0:03:27.669 ******** 2025-06-11 14:31:16.488225 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:16.488235 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:16.488246 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:16.488256 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:16.488267 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:16.488277 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:16.488288 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:16.488298 | orchestrator | 2025-06-11 14:31:16.488309 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-11 14:31:16.488330 | orchestrator | Wednesday 11 June 2025 14:31:14 +0000 (0:00:00.600) 0:03:28.270 ******** 2025-06-11 14:31:16.488341 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:16.488352 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:16.488362 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:16.488373 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:31:16.488384 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:16.488394 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:16.488405 | orchestrator | changed: [testbed-node-2] 2025-06-11 
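[Editor's example] The motd tasks above strip Ubuntu's dynamic motd machinery: the update-motd package is removed and motd-news is disabled, which on Ubuntu is done by flipping ENABLED in /etc/default/motd-news (the log confirms the file check and the change, not the exact edit):

  - name: Remove the update-motd package
    ansible.builtin.apt:
      name: update-motd
      state: absent

  - name: Check if /etc/default/motd-news exists
    ansible.builtin.stat:
      path: /etc/default/motd-news
    register: motd_news

  - name: Disable the dynamic motd-news service
    ansible.builtin.lineinfile:
      path: /etc/default/motd-news
      regexp: '^ENABLED='
      line: ENABLED=0
    when: motd_news.stat.exists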
14:31:16.488421 | orchestrator | 2025-06-11 14:31:16.488439 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-11 14:31:16.488457 | orchestrator | Wednesday 11 June 2025 14:31:14 +0000 (0:00:00.603) 0:03:28.873 ******** 2025-06-11 14:31:16.488475 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:16.488494 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:16.488513 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:16.488531 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:16.488550 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:16.488561 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:16.488572 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:16.488582 | orchestrator | 2025-06-11 14:31:16.488593 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-11 14:31:16.488604 | orchestrator | Wednesday 11 June 2025 14:31:15 +0000 (0:00:00.599) 0:03:29.473 ******** 2025-06-11 14:31:16.488677 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650696.532968, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488696 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650748.667007, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488709 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650763.415536, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488720 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650761.2600849, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488732 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650766.6037803, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488752 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650771.8930008, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488763 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749650872.060696, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:16.488786 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650731.1904333, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605059 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650645.521259, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605169 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650664.2325168, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605202 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650658.324792, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605215 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650659.9055622, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605253 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650668.1822128, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605266 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749650774.4137495, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 14:31:39.605278 | orchestrator | 2025-06-11 14:31:39.605291 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-11 14:31:39.605304 | orchestrator | Wednesday 11 June 2025 14:31:16 +0000 (0:00:00.959) 0:03:30.433 ******** 2025-06-11 14:31:39.605315 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:39.605327 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:39.605338 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:39.605367 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:39.605378 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:31:39.605389 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:39.605399 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:31:39.605410 | orchestrator | 2025-06-11 14:31:39.605421 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-11 14:31:39.605432 | orchestrator | Wednesday 11 June 2025 14:31:17 +0000 (0:00:01.090) 0:03:31.523 ******** 2025-06-11 14:31:39.605443 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:39.605454 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:39.605464 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:39.605475 | orchestrator | changed: [testbed-node-5] 2025-06-11 
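[Editor's example] The find results above show /etc/pam.d/sshd and /etc/pam.d/login being edited on every host: the pam_motd.so rules are dropped so PAM stops rendering the dynamic motd, and (in the step that follows) sshd is told not to print it either, leaving only the static files the role copies. A sketch of that sequence:

  - name: Get all configuration files in /etc/pam.d
    ansible.builtin.find:
      paths: /etc/pam.d
    register: pam_d

  - name: Remove pam_motd.so rules
    ansible.builtin.lineinfile:
      path: "{{ item.path }}"
      regexp: 'pam_motd\.so'
      state: absent
    loop: "{{ pam_d.files }}"
    loop_control:
      label: "{{ item.path }}"

  - name: Configure SSH to not print the motd
    ansible.builtin.lineinfile:
      path: /etc/ssh/sshd_config
      regexp: '^#?PrintMotd'
      line: PrintMotd no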
14:31:39.605504 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:39.605515 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:39.605526 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:31:39.605536 | orchestrator | 2025-06-11 14:31:39.605547 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-11 14:31:39.605558 | orchestrator | Wednesday 11 June 2025 14:31:18 +0000 (0:00:01.077) 0:03:32.601 ******** 2025-06-11 14:31:39.605569 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:39.605579 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:39.605590 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:39.605601 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:31:39.605611 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:39.605640 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:31:39.605651 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:39.605662 | orchestrator | 2025-06-11 14:31:39.605673 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-11 14:31:39.605684 | orchestrator | Wednesday 11 June 2025 14:31:19 +0000 (0:00:01.133) 0:03:33.735 ******** 2025-06-11 14:31:39.605695 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:31:39.605706 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:31:39.605725 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:31:39.605736 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:31:39.605747 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:31:39.605758 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:31:39.605769 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:31:39.605779 | orchestrator | 2025-06-11 14:31:39.605790 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-11 14:31:39.605801 | orchestrator | Wednesday 11 June 2025 14:31:20 +0000 (0:00:00.276) 0:03:34.011 ******** 2025-06-11 14:31:39.605811 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:39.605823 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:39.605834 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:39.605844 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:39.605855 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:39.605865 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:39.605876 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:39.605886 | orchestrator | 2025-06-11 14:31:39.605897 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-11 14:31:39.605908 | orchestrator | Wednesday 11 June 2025 14:31:20 +0000 (0:00:00.778) 0:03:34.790 ******** 2025-06-11 14:31:39.605921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:31:39.605934 | orchestrator | 2025-06-11 14:31:39.605945 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-11 14:31:39.605956 | orchestrator | Wednesday 11 June 2025 14:31:21 +0000 (0:00:00.405) 0:03:35.195 ******** 2025-06-11 14:31:39.605966 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:39.605977 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:39.605988 | orchestrator | changed: 
[testbed-node-4] 2025-06-11 14:31:39.605999 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:39.606009 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:31:39.606076 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:31:39.606087 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:39.606098 | orchestrator | 2025-06-11 14:31:39.606109 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-11 14:31:39.606120 | orchestrator | Wednesday 11 June 2025 14:31:28 +0000 (0:00:07.354) 0:03:42.549 ******** 2025-06-11 14:31:39.606130 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:39.606141 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:39.606152 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:39.606163 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:39.606180 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:39.606191 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:39.606201 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:39.606212 | orchestrator | 2025-06-11 14:31:39.606223 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-11 14:31:39.606233 | orchestrator | Wednesday 11 June 2025 14:31:29 +0000 (0:00:01.126) 0:03:43.676 ******** 2025-06-11 14:31:39.606244 | orchestrator | ok: [testbed-manager] 2025-06-11 14:31:39.606255 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:31:39.606265 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:31:39.606276 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:31:39.606286 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:31:39.606296 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:31:39.606307 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:31:39.606317 | orchestrator | 2025-06-11 14:31:39.606328 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-11 14:31:39.606339 | orchestrator | Wednesday 11 June 2025 14:31:30 +0000 (0:00:00.997) 0:03:44.674 ******** 2025-06-11 14:31:39.606350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:31:39.606368 | orchestrator | 2025-06-11 14:31:39.606379 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-11 14:31:39.606389 | orchestrator | Wednesday 11 June 2025 14:31:31 +0000 (0:00:00.487) 0:03:45.162 ******** 2025-06-11 14:31:39.606400 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:31:39.606411 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:31:39.606421 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:31:39.606432 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:39.606443 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:31:39.606453 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:31:39.606464 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:39.606474 | orchestrator | 2025-06-11 14:31:39.606485 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-11 14:31:39.606496 | orchestrator | Wednesday 11 June 2025 14:31:38 +0000 (0:00:07.796) 0:03:52.958 ******** 2025-06-11 14:31:39.606507 | orchestrator | changed: [testbed-manager] 2025-06-11 14:31:39.606518 | orchestrator | changed: [testbed-node-3] 
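[Editor's example] The rng role above installs an rngd entropy daemon, removes the competing haveged package, and manages the service. The log only says "rng package", so the package and service names below are assumptions based on current Ubuntu packaging:

  - name: Install the rng daemon
    ansible.builtin.apt:
      name: rng-tools5        # assumed package name
      state: absent
    vars: {}
    when: false               # placeholder guard removed below in the real form

  - name: Install the rng daemon
    ansible.builtin.apt:
      name: rng-tools5        # assumed package name
      state: present

  - name: Remove the haveged package
    ansible.builtin.apt:
      name: haveged
      state: absent

  - name: Manage the rng service
    ansible.builtin.service:
      name: rngd              # assumed service name
      state: started
      enabled: true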
2025-06-11 14:31:39.606528 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:31:39.606547 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.379894 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.380003 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.380018 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.380030 | orchestrator | 2025-06-11 14:32:43.380042 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-11 14:32:43.380055 | orchestrator | Wednesday 11 June 2025 14:31:39 +0000 (0:00:00.594) 0:03:53.553 ******** 2025-06-11 14:32:43.380066 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.380077 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.380088 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.380098 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.380109 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.380119 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.380130 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.380140 | orchestrator | 2025-06-11 14:32:43.380151 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-11 14:32:43.380162 | orchestrator | Wednesday 11 June 2025 14:31:40 +0000 (0:00:01.150) 0:03:54.703 ******** 2025-06-11 14:32:43.380173 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.380184 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.380195 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.380206 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.380216 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.380227 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.380237 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.380248 | orchestrator | 2025-06-11 14:32:43.380258 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-11 14:32:43.380269 | orchestrator | Wednesday 11 June 2025 14:31:41 +0000 (0:00:01.069) 0:03:55.773 ******** 2025-06-11 14:32:43.380280 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:43.380291 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:43.380302 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:43.380312 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:43.380323 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:43.380333 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:43.380344 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:43.380354 | orchestrator | 2025-06-11 14:32:43.380365 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-11 14:32:43.380377 | orchestrator | Wednesday 11 June 2025 14:31:42 +0000 (0:00:00.256) 0:03:56.030 ******** 2025-06-11 14:32:43.380388 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:43.380398 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:43.380409 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:43.380419 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:43.380431 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:43.380466 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:43.380478 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:43.380490 | orchestrator | 2025-06-11 14:32:43.380502 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution 
variable to default value] *** 2025-06-11 14:32:43.380514 | orchestrator | Wednesday 11 June 2025 14:31:42 +0000 (0:00:00.313) 0:03:56.343 ******** 2025-06-11 14:32:43.380526 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:43.380537 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:43.380547 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:43.380559 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:43.380569 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:43.380620 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:43.380641 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:43.380661 | orchestrator | 2025-06-11 14:32:43.380680 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-11 14:32:43.380698 | orchestrator | Wednesday 11 June 2025 14:31:42 +0000 (0:00:00.297) 0:03:56.641 ******** 2025-06-11 14:32:43.380717 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:43.380735 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:43.380754 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:43.380774 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:43.380789 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:43.380815 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:43.380826 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:43.380836 | orchestrator | 2025-06-11 14:32:43.380847 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-11 14:32:43.380858 | orchestrator | Wednesday 11 June 2025 14:31:48 +0000 (0:00:05.511) 0:04:02.152 ******** 2025-06-11 14:32:43.380870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:32:43.380884 | orchestrator | 2025-06-11 14:32:43.380894 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-11 14:32:43.380905 | orchestrator | Wednesday 11 June 2025 14:31:48 +0000 (0:00:00.389) 0:04:02.542 ******** 2025-06-11 14:32:43.380916 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.380926 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-11 14:32:43.380937 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.380948 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:43.380958 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-11 14:32:43.380969 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.380980 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-11 14:32:43.380990 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:43.381001 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.381011 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:43.381022 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-11 14:32:43.381032 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.381043 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-11 14:32:43.381053 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:43.381064 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  
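The "Disable apt-daily timers" task running here is skipped on every host (its condition evaluated to false), but the loop items reveal what it would manage. A hedged sketch of such a task using the stock systemd module, not necessarily the role's actual implementation:

- name: Disable apt-daily timers
  ansible.builtin.systemd:
    name: "{{ item }}.timer"  # apt-daily-upgrade.timer, apt-daily.timer
    state: stopped
    enabled: false
  loop:
    - apt-daily-upgrade
    - apt-daily

Stopping these timers prevents unattended apt runs from grabbing the dpkg lock while the deployment itself is installing packages.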
2025-06-11 14:32:43.381074 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-11 14:32:43.381085 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:43.381115 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:43.381126 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-11 14:32:43.381137 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-11 14:32:43.381148 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:43.381158 | orchestrator | 2025-06-11 14:32:43.381178 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-11 14:32:43.381189 | orchestrator | Wednesday 11 June 2025 14:31:48 +0000 (0:00:00.345) 0:04:02.887 ******** 2025-06-11 14:32:43.381200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:32:43.381211 | orchestrator | 2025-06-11 14:32:43.381221 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-11 14:32:43.381232 | orchestrator | Wednesday 11 June 2025 14:31:49 +0000 (0:00:00.406) 0:04:03.293 ******** 2025-06-11 14:32:43.381242 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-11 14:32:43.381253 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-11 14:32:43.381263 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:43.381274 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:43.381285 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-11 14:32:43.381295 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-11 14:32:43.381312 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:43.381329 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-11 14:32:43.381340 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:43.381350 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:43.381361 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-11 14:32:43.381371 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:43.381382 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-11 14:32:43.381392 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:43.381403 | orchestrator | 2025-06-11 14:32:43.381413 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-11 14:32:43.381424 | orchestrator | Wednesday 11 June 2025 14:31:49 +0000 (0:00:00.350) 0:04:03.644 ******** 2025-06-11 14:32:43.381435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:32:43.381446 | orchestrator | 2025-06-11 14:32:43.381456 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-06-11 14:32:43.381467 | orchestrator | Wednesday 11 June 2025 14:31:50 +0000 (0:00:00.541) 0:04:04.185 ******** 2025-06-11 14:32:43.381477 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.381488 | orchestrator | changed: 
[testbed-node-2] 2025-06-11 14:32:43.381499 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.381509 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.381519 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.381530 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.381540 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.381551 | orchestrator | 2025-06-11 14:32:43.381561 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-11 14:32:43.381572 | orchestrator | Wednesday 11 June 2025 14:32:22 +0000 (0:00:32.465) 0:04:36.650 ******** 2025-06-11 14:32:43.381582 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.381652 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.381674 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.381693 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.381713 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.381734 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.381753 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.381771 | orchestrator | 2025-06-11 14:32:43.381791 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-06-11 14:32:43.381812 | orchestrator | Wednesday 11 June 2025 14:32:29 +0000 (0:00:07.161) 0:04:43.812 ******** 2025-06-11 14:32:43.381844 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.381862 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.381872 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.381883 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.381893 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.381904 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.381915 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.381925 | orchestrator | 2025-06-11 14:32:43.381936 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-11 14:32:43.381947 | orchestrator | Wednesday 11 June 2025 14:32:36 +0000 (0:00:06.753) 0:04:50.566 ******** 2025-06-11 14:32:43.381958 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:43.381968 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:43.381978 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:43.381989 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:43.381999 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:43.382010 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:43.382086 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:43.382097 | orchestrator | 2025-06-11 14:32:43.382108 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-11 14:32:43.382119 | orchestrator | Wednesday 11 June 2025 14:32:38 +0000 (0:00:01.518) 0:04:52.084 ******** 2025-06-11 14:32:43.382130 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:43.382140 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:43.382151 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:43.382161 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:43.382172 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:43.382182 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:43.382193 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:43.382203 | orchestrator | 2025-06-11 14:32:43.382214 | orchestrator | TASK 
[osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-11 14:32:43.382235 | orchestrator | Wednesday 11 June 2025 14:32:43 +0000 (0:00:05.238) 0:04:57.323 ******** 2025-06-11 14:32:54.055090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:32:54.055174 | orchestrator | 2025-06-11 14:32:54.055188 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-11 14:32:54.055201 | orchestrator | Wednesday 11 June 2025 14:32:43 +0000 (0:00:00.422) 0:04:57.746 ******** 2025-06-11 14:32:54.055213 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:54.055225 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:54.055237 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:54.055248 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:54.055259 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:54.055271 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:54.055282 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:54.055293 | orchestrator | 2025-06-11 14:32:54.055305 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-11 14:32:54.055316 | orchestrator | Wednesday 11 June 2025 14:32:44 +0000 (0:00:00.693) 0:04:58.439 ******** 2025-06-11 14:32:54.055328 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:54.055339 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:54.055351 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:54.055362 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:54.055374 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:54.055386 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:54.055398 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:54.055410 | orchestrator | 2025-06-11 14:32:54.055422 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-11 14:32:54.055434 | orchestrator | Wednesday 11 June 2025 14:32:45 +0000 (0:00:01.516) 0:04:59.956 ******** 2025-06-11 14:32:54.055446 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:32:54.055480 | orchestrator | changed: [testbed-manager] 2025-06-11 14:32:54.055492 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:32:54.055505 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:32:54.055517 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:32:54.055529 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:32:54.055541 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:32:54.055553 | orchestrator | 2025-06-11 14:32:54.055565 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-11 14:32:54.055621 | orchestrator | Wednesday 11 June 2025 14:32:46 +0000 (0:00:00.785) 0:05:00.742 ******** 2025-06-11 14:32:54.055637 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.055650 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.055663 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.055677 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:54.055689 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:54.055703 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:54.055716 | orchestrator | skipping: [testbed-node-2] 2025-06-11 
14:32:54.055729 | orchestrator | 2025-06-11 14:32:54.055742 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-06-11 14:32:54.055755 | orchestrator | Wednesday 11 June 2025 14:32:47 +0000 (0:00:00.323) 0:05:01.065 ******** 2025-06-11 14:32:54.055767 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.055780 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.055791 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.055802 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:54.055814 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:54.055826 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:54.055837 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:54.055848 | orchestrator | 2025-06-11 14:32:54.055860 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-11 14:32:54.055875 | orchestrator | Wednesday 11 June 2025 14:32:47 +0000 (0:00:00.432) 0:05:01.498 ******** 2025-06-11 14:32:54.055886 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:54.055897 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:54.055908 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:54.055919 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:54.055929 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:54.055940 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:54.055951 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:54.055962 | orchestrator | 2025-06-11 14:32:54.055973 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-11 14:32:54.055985 | orchestrator | Wednesday 11 June 2025 14:32:47 +0000 (0:00:00.278) 0:05:01.777 ******** 2025-06-11 14:32:54.055996 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.056008 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.056020 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.056031 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:54.056043 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:54.056055 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:54.056066 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:54.056078 | orchestrator | 2025-06-11 14:32:54.056090 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-11 14:32:54.056101 | orchestrator | Wednesday 11 June 2025 14:32:48 +0000 (0:00:00.269) 0:05:02.046 ******** 2025-06-11 14:32:54.056113 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:54.056125 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:54.056136 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:54.056148 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:54.056160 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:54.056171 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:54.056182 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:54.056193 | orchestrator | 2025-06-11 14:32:54.056206 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-11 14:32:54.056218 | orchestrator | Wednesday 11 June 2025 14:32:48 +0000 (0:00:00.332) 0:05:02.378 ******** 2025-06-11 14:32:54.056238 | orchestrator | ok: [testbed-manager] =>  2025-06-11 14:32:54.056249 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056260 | orchestrator | ok: [testbed-node-3] =>  2025-06-11 
14:32:54.056272 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056283 | orchestrator | ok: [testbed-node-4] =>  2025-06-11 14:32:54.056295 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056306 | orchestrator | ok: [testbed-node-5] =>  2025-06-11 14:32:54.056317 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056328 | orchestrator | ok: [testbed-node-0] =>  2025-06-11 14:32:54.056339 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056369 | orchestrator | ok: [testbed-node-1] =>  2025-06-11 14:32:54.056382 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056394 | orchestrator | ok: [testbed-node-2] =>  2025-06-11 14:32:54.056405 | orchestrator |  docker_version: 5:27.5.1 2025-06-11 14:32:54.056416 | orchestrator | 2025-06-11 14:32:54.056428 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-11 14:32:54.056440 | orchestrator | Wednesday 11 June 2025 14:32:48 +0000 (0:00:00.285) 0:05:02.664 ******** 2025-06-11 14:32:54.056451 | orchestrator | ok: [testbed-manager] =>  2025-06-11 14:32:54.056462 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056473 | orchestrator | ok: [testbed-node-3] =>  2025-06-11 14:32:54.056484 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056495 | orchestrator | ok: [testbed-node-4] =>  2025-06-11 14:32:54.056506 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056517 | orchestrator | ok: [testbed-node-5] =>  2025-06-11 14:32:54.056528 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056539 | orchestrator | ok: [testbed-node-0] =>  2025-06-11 14:32:54.056550 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056560 | orchestrator | ok: [testbed-node-1] =>  2025-06-11 14:32:54.056571 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056582 | orchestrator | ok: [testbed-node-2] =>  2025-06-11 14:32:54.056606 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-11 14:32:54.056617 | orchestrator | 2025-06-11 14:32:54.056627 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-11 14:32:54.056638 | orchestrator | Wednesday 11 June 2025 14:32:49 +0000 (0:00:00.451) 0:05:03.116 ******** 2025-06-11 14:32:54.056648 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.056658 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.056668 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.056678 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:54.056688 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:54.056698 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:32:54.056708 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:54.056718 | orchestrator | 2025-06-11 14:32:54.056729 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-11 14:32:54.056739 | orchestrator | Wednesday 11 June 2025 14:32:49 +0000 (0:00:00.250) 0:05:03.366 ******** 2025-06-11 14:32:54.056749 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.056759 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.056769 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.056779 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:32:54.056790 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:32:54.056800 | orchestrator | skipping: [testbed-node-1] 2025-06-11 
14:32:54.056810 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:32:54.056821 | orchestrator | 2025-06-11 14:32:54.056831 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-11 14:32:54.056841 | orchestrator | Wednesday 11 June 2025 14:32:49 +0000 (0:00:00.276) 0:05:03.643 ******** 2025-06-11 14:32:54.056853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:32:54.056873 | orchestrator | 2025-06-11 14:32:54.056884 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-11 14:32:54.056896 | orchestrator | Wednesday 11 June 2025 14:32:50 +0000 (0:00:00.451) 0:05:04.094 ******** 2025-06-11 14:32:54.056907 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:54.056918 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:54.056929 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:54.056940 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:54.056956 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:54.056968 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:54.056979 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:54.056990 | orchestrator | 2025-06-11 14:32:54.057002 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-11 14:32:54.057013 | orchestrator | Wednesday 11 June 2025 14:32:50 +0000 (0:00:00.800) 0:05:04.894 ******** 2025-06-11 14:32:54.057025 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:32:54.057036 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:32:54.057048 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:32:54.057059 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:32:54.057070 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:32:54.057081 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:32:54.057093 | orchestrator | ok: [testbed-manager] 2025-06-11 14:32:54.057104 | orchestrator | 2025-06-11 14:32:54.057115 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-11 14:32:54.057128 | orchestrator | Wednesday 11 June 2025 14:32:53 +0000 (0:00:02.674) 0:05:07.569 ******** 2025-06-11 14:32:54.057139 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-11 14:32:54.057151 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-11 14:32:54.057162 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-11 14:32:54.057173 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-11 14:32:54.057184 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-11 14:32:54.057196 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-11 14:32:54.057206 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:32:54.057217 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-11 14:32:54.057228 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-11 14:32:54.057239 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-11 14:32:54.057250 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:32:54.057260 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-11 14:32:54.057271 | orchestrator | 
skipping: [testbed-node-5] => (item=docker.io)  2025-06-11 14:32:54.057281 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-11 14:32:54.057291 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:32:54.057301 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-11 14:32:54.057311 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-11 14:32:54.057327 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-11 14:33:48.747492 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:33:48.747654 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-11 14:33:48.747672 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-11 14:33:48.747684 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-11 14:33:48.747695 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:33:48.747706 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:33:48.747717 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-11 14:33:48.747728 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-11 14:33:48.747738 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-11 14:33:48.747749 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:33:48.747760 | orchestrator | 2025-06-11 14:33:48.747799 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-11 14:33:48.747813 | orchestrator | Wednesday 11 June 2025 14:32:54 +0000 (0:00:00.614) 0:05:08.183 ******** 2025-06-11 14:33:48.747824 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.747834 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.747845 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.747855 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.747866 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.747876 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.747887 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.747897 | orchestrator | 2025-06-11 14:33:48.747908 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-11 14:33:48.747919 | orchestrator | Wednesday 11 June 2025 14:32:59 +0000 (0:00:05.665) 0:05:13.849 ******** 2025-06-11 14:33:48.747930 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.747940 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.747951 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.747961 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.747972 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.747982 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.747993 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748003 | orchestrator | 2025-06-11 14:33:48.748014 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-11 14:33:48.748026 | orchestrator | Wednesday 11 June 2025 14:33:00 +0000 (0:00:01.009) 0:05:14.859 ******** 2025-06-11 14:33:48.748038 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.748051 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748063 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748075 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748087 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748099 | 
orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748111 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748123 | orchestrator | 2025-06-11 14:33:48.748135 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-11 14:33:48.748147 | orchestrator | Wednesday 11 June 2025 14:33:07 +0000 (0:00:06.857) 0:05:21.716 ******** 2025-06-11 14:33:48.748159 | orchestrator | changed: [testbed-manager] 2025-06-11 14:33:48.748171 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748183 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748195 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748206 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748218 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748230 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748242 | orchestrator | 2025-06-11 14:33:48.748255 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-11 14:33:48.748267 | orchestrator | Wednesday 11 June 2025 14:33:10 +0000 (0:00:03.063) 0:05:24.780 ******** 2025-06-11 14:33:48.748279 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.748306 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748318 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748331 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748343 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748355 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748367 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748379 | orchestrator | 2025-06-11 14:33:48.748389 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-11 14:33:48.748400 | orchestrator | Wednesday 11 June 2025 14:33:12 +0000 (0:00:01.529) 0:05:26.309 ******** 2025-06-11 14:33:48.748411 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.748421 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748432 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748442 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748453 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748471 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748482 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748492 | orchestrator | 2025-06-11 14:33:48.748503 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-11 14:33:48.748514 | orchestrator | Wednesday 11 June 2025 14:33:13 +0000 (0:00:01.367) 0:05:27.676 ******** 2025-06-11 14:33:48.748525 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:33:48.748535 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:33:48.748546 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:33:48.748556 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:33:48.748597 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:33:48.748609 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:33:48.748620 | orchestrator | changed: [testbed-manager] 2025-06-11 14:33:48.748630 | orchestrator |
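The pin and lock steps above keep apt from drifting away from the Docker release this run was tested with (5:27.5.1, as printed earlier). The role's actual tasks are not shown in the log; the two standard mechanisms, sketched under the assumption of an apt preferences file for the pin and a dpkg hold for the lock, look like this:

- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce  # assumed path
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1001

- name: Lock containerd package  # equivalent to "apt-mark hold containerd.io"
  ansible.builtin.dpkg_selections:
    name: containerd.io  # assumed package name; the log only says "containerd package"
    selection: hold

The unlock step seen above is the mirror image (selection: install): it releases the hold so the install task that follows may change the package before it is locked again.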
2025-06-11 14:33:48.748641 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-11 14:33:48.748652 | orchestrator | Wednesday 11 June 2025 14:33:14 +0000 (0:00:00.570) 0:05:28.246 ******** 2025-06-11 14:33:48.748662 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.748673 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748683 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748694 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748704 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748715 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748725 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748736 | orchestrator | 2025-06-11 14:33:48.748747 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-11 14:33:48.748758 | orchestrator | Wednesday 11 June 2025 14:33:23 +0000 (0:00:08.883) 0:05:37.130 ******** 2025-06-11 14:33:48.748768 | orchestrator | changed: [testbed-manager] 2025-06-11 14:33:48.748796 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748808 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748818 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748829 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748839 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748850 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748861 | orchestrator | 2025-06-11 14:33:48.748871 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-11 14:33:48.748882 | orchestrator | Wednesday 11 June 2025 14:33:24 +0000 (0:00:00.877) 0:05:38.007 ******** 2025-06-11 14:33:48.748893 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.748903 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.748914 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.748924 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.748935 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.748946 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.748956 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.748966 | orchestrator | 2025-06-11 14:33:48.748977 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-11 14:33:48.748988 | orchestrator | Wednesday 11 June 2025 14:33:32 +0000 (0:00:08.462) 0:05:46.470 ******** 2025-06-11 14:33:48.748998 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.749009 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.749019 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.749030 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.749041 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.749051 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.749062 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.749073 | orchestrator | 2025-06-11 14:33:48.749083 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-11 14:33:48.749094 | orchestrator | Wednesday 11 June 2025 14:33:42 +0000 (0:00:09.911) 0:05:56.381 ******** 2025-06-11 14:33:48.749105 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-11 14:33:48.749116 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-11 14:33:48.749134 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-11 14:33:48.749145 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-11 14:33:48.749155 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-11 14:33:48.749166 | orchestrator | ok:
[testbed-node-0] => (item=python3-docker) 2025-06-11 14:33:48.749176 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-11 14:33:48.749187 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-11 14:33:48.749198 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-11 14:33:48.749208 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-11 14:33:48.749219 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-11 14:33:48.749229 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-11 14:33:48.749240 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-11 14:33:48.749250 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-11 14:33:48.749261 | orchestrator | 2025-06-11 14:33:48.749272 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-11 14:33:48.749283 | orchestrator | Wednesday 11 June 2025 14:33:43 +0000 (0:00:01.237) 0:05:57.619 ******** 2025-06-11 14:33:48.749293 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:33:48.749304 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:33:48.749314 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:33:48.749325 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:33:48.749335 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:33:48.749346 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:33:48.749362 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:33:48.749373 | orchestrator | 2025-06-11 14:33:48.749384 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-11 14:33:48.749394 | orchestrator | Wednesday 11 June 2025 14:33:44 +0000 (0:00:00.554) 0:05:58.173 ******** 2025-06-11 14:33:48.749405 | orchestrator | ok: [testbed-manager] 2025-06-11 14:33:48.749416 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:33:48.749427 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:33:48.749437 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:33:48.749447 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:33:48.749458 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:33:48.749468 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:33:48.749479 | orchestrator | 2025-06-11 14:33:48.749490 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-11 14:33:48.749502 | orchestrator | Wednesday 11 June 2025 14:33:47 +0000 (0:00:03.735) 0:06:01.908 ******** 2025-06-11 14:33:48.749512 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:33:48.749523 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:33:48.749533 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:33:48.749544 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:33:48.749554 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:33:48.749565 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:33:48.749592 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:33:48.749603 | orchestrator | 2025-06-11 14:33:48.749614 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-06-11 14:33:48.749625 | orchestrator | Wednesday 11 June 2025 14:33:48 +0000 (0:00:00.503) 0:06:02.412 ******** 2025-06-11 14:33:48.749636 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-11 
14:33:48.749647 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-11 14:33:48.749657 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:33:48.749668 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-11 14:33:48.749679 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-11 14:33:48.749689 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:33:48.749700 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-11 14:33:48.749710 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-11 14:33:48.749728 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:33:48.749739 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-11 14:33:48.749756 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-11 14:34:07.525802 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:07.525884 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-11 14:34:07.525891 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-11 14:34:07.525895 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:07.525900 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-11 14:34:07.525904 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-11 14:34:07.525908 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:07.525911 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-11 14:34:07.525915 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-11 14:34:07.525919 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:07.525923 | orchestrator | 2025-06-11 14:34:07.525928 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-11 14:34:07.525933 | orchestrator | Wednesday 11 June 2025 14:33:48 +0000 (0:00:00.543) 0:06:02.955 ******** 2025-06-11 14:34:07.525937 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:07.525941 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:07.525944 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:07.525948 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:07.525952 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:07.525955 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:07.525959 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:07.525963 | orchestrator | 2025-06-11 14:34:07.525967 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-11 14:34:07.525970 | orchestrator | Wednesday 11 June 2025 14:33:49 +0000 (0:00:00.528) 0:06:03.484 ******** 2025-06-11 14:34:07.525975 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:07.525978 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:07.525982 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:07.525986 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:07.525990 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:07.525993 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:07.525997 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:07.526000 | orchestrator | 2025-06-11 14:34:07.526004 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-11 14:34:07.526008 | orchestrator | 
Wednesday 11 June 2025 14:33:50 +0000 (0:00:00.504) 0:06:03.988 ******** 2025-06-11 14:34:07.526012 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:07.526053 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:07.526057 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:07.526061 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:07.526065 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:07.526069 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:07.526073 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:07.526077 | orchestrator | 2025-06-11 14:34:07.526081 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-11 14:34:07.526085 | orchestrator | Wednesday 11 June 2025 14:33:50 +0000 (0:00:00.750) 0:06:04.739 ******** 2025-06-11 14:34:07.526089 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526094 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526098 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526101 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526105 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526109 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526113 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526117 | orchestrator | 2025-06-11 14:34:07.526121 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-11 14:34:07.526142 | orchestrator | Wednesday 11 June 2025 14:33:52 +0000 (0:00:01.738) 0:06:06.478 ******** 2025-06-11 14:34:07.526148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:34:07.526153 | orchestrator | 2025-06-11 14:34:07.526158 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-11 14:34:07.526162 | orchestrator | Wednesday 11 June 2025 14:33:53 +0000 (0:00:00.857) 0:06:07.335 ******** 2025-06-11 14:34:07.526166 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526170 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:07.526174 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:07.526178 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:07.526182 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:07.526186 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:07.526190 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:07.526194 | orchestrator | 2025-06-11 14:34:07.526198 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-11 14:34:07.526202 | orchestrator | Wednesday 11 June 2025 14:33:54 +0000 (0:00:00.805) 0:06:08.140 ******** 2025-06-11 14:34:07.526206 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526210 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:07.526214 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:07.526218 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:07.526222 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:07.526226 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:07.526230 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:07.526234 | orchestrator | 2025-06-11 14:34:07.526238 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] 
*********************** 2025-06-11 14:34:07.526242 | orchestrator | Wednesday 11 June 2025 14:33:55 +0000 (0:00:01.107) 0:06:09.248 ******** 2025-06-11 14:34:07.526260 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526264 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:07.526268 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:07.526272 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:07.526276 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:07.526280 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:07.526284 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:07.526288 | orchestrator | 2025-06-11 14:34:07.526292 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-11 14:34:07.526296 | orchestrator | Wednesday 11 June 2025 14:33:56 +0000 (0:00:01.350) 0:06:10.599 ******** 2025-06-11 14:34:07.526311 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:07.526316 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526320 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526324 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526328 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526332 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526336 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526340 | orchestrator | 2025-06-11 14:34:07.526344 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-11 14:34:07.526348 | orchestrator | Wednesday 11 June 2025 14:33:57 +0000 (0:00:01.288) 0:06:11.887 ******** 2025-06-11 14:34:07.526353 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526357 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:07.526361 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:07.526365 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:07.526370 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:07.526375 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:07.526380 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:07.526385 | orchestrator | 2025-06-11 14:34:07.526389 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-11 14:34:07.526398 | orchestrator | Wednesday 11 June 2025 14:33:59 +0000 (0:00:01.188) 0:06:13.076 ******** 2025-06-11 14:34:07.526403 | orchestrator | changed: [testbed-manager] 2025-06-11 14:34:07.526407 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:07.526412 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:07.526417 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:07.526421 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:07.526426 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:07.526431 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:07.526435 | orchestrator | 2025-06-11 14:34:07.526440 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-11 14:34:07.526444 | orchestrator | Wednesday 11 June 2025 14:34:00 +0000 (0:00:01.003) 0:06:14.387 ******** 2025-06-11 14:34:07.526449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:34:07.526454 | orchestrator |
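Both the systemd overlay and daemon.json only take effect once dockerd restarts, which is why no restart happens inline: the copy tasks notify a handler, and the "Flush handlers" tasks further down are where the restart actually fires. A sketch of that wiring (illustrative; docker_daemon_config is a hypothetical variable, not one confirmed by this log):

- name: Copy daemon.json configuration file
  ansible.builtin.copy:
    dest: /etc/docker/daemon.json
    mode: "0644"
    content: "{{ docker_daemon_config | to_nice_json }}"  # hypothetical variable
  notify: Restart docker service

# The matching handler (declared under handlers:) runs at most once,
# and only if one of the notifying tasks reported "changed":
- name: Restart docker service
  ansible.builtin.service:
    name: docker
    state: restarted

This matches the behaviour visible below: the handler restarts docker on the six nodes where the configuration changed and is skipped on testbed-manager, where it did not.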
2025-06-11 14:34:07.526459 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-11 14:34:07.526463 | orchestrator | Wednesday 11 June 2025 14:34:01 +0000 (0:00:01.003) 0:06:15.390 ******** 2025-06-11 14:34:07.526468 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526472 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526477 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526481 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526486 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526490 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526495 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526499 | orchestrator | 2025-06-11 14:34:07.526504 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-11 14:34:07.526508 | orchestrator | Wednesday 11 June 2025 14:34:02 +0000 (0:00:01.337) 0:06:16.728 ******** 2025-06-11 14:34:07.526513 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526517 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526522 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526526 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526531 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526535 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526540 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526545 | orchestrator | 2025-06-11 14:34:07.526549 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-11 14:34:07.526554 | orchestrator | Wednesday 11 June 2025 14:34:03 +0000 (0:00:01.105) 0:06:17.833 ******** 2025-06-11 14:34:07.526575 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526582 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526588 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526595 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526602 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526609 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526614 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526618 | orchestrator | 2025-06-11 14:34:07.526622 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-11 14:34:07.526627 | orchestrator | Wednesday 11 June 2025 14:34:05 +0000 (0:00:01.094) 0:06:19.229 ******** 2025-06-11 14:34:07.526631 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:07.526635 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:07.526639 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:07.526643 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:07.526647 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:07.526651 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:07.526656 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:07.526660 | orchestrator | 2025-06-11 14:34:07.526664 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-11 14:34:07.526668 | orchestrator | Wednesday 11 June 2025 14:34:06 +0000 (0:00:01.094) 0:06:20.324 ******** 2025-06-11 14:34:07.526673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:34:07.526681 | orchestrator | 2025-06-11 14:34:07.526685 | orchestrator
| TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:07.526690 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.854) 0:06:21.178 ******** 2025-06-11 14:34:07.526694 | orchestrator | 2025-06-11 14:34:07.526698 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:07.526703 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.039) 0:06:21.218 ******** 2025-06-11 14:34:07.526707 | orchestrator | 2025-06-11 14:34:07.526711 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:07.526715 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.037) 0:06:21.256 ******** 2025-06-11 14:34:07.526720 | orchestrator | 2025-06-11 14:34:07.526724 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:07.526728 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.044) 0:06:21.300 ******** 2025-06-11 14:34:07.526731 | orchestrator | 2025-06-11 14:34:07.526738 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:33.084541 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.038) 0:06:21.339 ******** 2025-06-11 14:34:33.084752 | orchestrator | 2025-06-11 14:34:33.084773 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:33.084786 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.038) 0:06:21.377 ******** 2025-06-11 14:34:33.084797 | orchestrator | 2025-06-11 14:34:33.084809 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-11 14:34:33.084820 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.045) 0:06:21.423 ******** 2025-06-11 14:34:33.084831 | orchestrator | 2025-06-11 14:34:33.084842 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-11 14:34:33.084853 | orchestrator | Wednesday 11 June 2025 14:34:07 +0000 (0:00:00.038) 0:06:21.461 ******** 2025-06-11 14:34:33.084864 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:33.084876 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:33.084887 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:33.084898 | orchestrator | 2025-06-11 14:34:33.084909 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-11 14:34:33.084921 | orchestrator | Wednesday 11 June 2025 14:34:08 +0000 (0:00:01.298) 0:06:22.760 ******** 2025-06-11 14:34:33.084931 | orchestrator | changed: [testbed-manager] 2025-06-11 14:34:33.084943 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:33.084984 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:33.084996 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:33.085007 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:33.085018 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:33.085029 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:33.085040 | orchestrator | 2025-06-11 14:34:33.085051 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-11 14:34:33.085062 | orchestrator | Wednesday 11 June 2025 14:34:10 +0000 (0:00:01.309) 0:06:24.070 ******** 2025-06-11 14:34:33.085075 | orchestrator | changed: [testbed-manager] 2025-06-11 
14:34:33.085087 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:33.085099 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:33.085110 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:33.085122 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:33.085135 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:33.085147 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:33.085159 | orchestrator | 2025-06-11 14:34:33.085171 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-11 14:34:33.085184 | orchestrator | Wednesday 11 June 2025 14:34:11 +0000 (0:00:01.107) 0:06:25.177 ******** 2025-06-11 14:34:33.085221 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:33.085233 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:33.085246 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:33.085258 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:33.085269 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:33.085281 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:33.085293 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:33.085306 | orchestrator | 2025-06-11 14:34:33.085318 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-11 14:34:33.085331 | orchestrator | Wednesday 11 June 2025 14:34:13 +0000 (0:00:02.130) 0:06:27.307 ******** 2025-06-11 14:34:33.085343 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:33.085355 | orchestrator | 2025-06-11 14:34:33.085368 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-11 14:34:33.085380 | orchestrator | Wednesday 11 June 2025 14:34:13 +0000 (0:00:00.106) 0:06:27.414 ******** 2025-06-11 14:34:33.085392 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.085402 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:33.085428 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:33.085439 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:33.085449 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:33.085460 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:33.085470 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:33.085481 | orchestrator | 2025-06-11 14:34:33.085492 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-11 14:34:33.085503 | orchestrator | Wednesday 11 June 2025 14:34:14 +0000 (0:00:00.957) 0:06:28.372 ******** 2025-06-11 14:34:33.085514 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:33.085524 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:33.085535 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:33.085545 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:33.085585 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:33.085604 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:33.085623 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:33.085635 | orchestrator | 2025-06-11 14:34:33.085646 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-11 14:34:33.085657 | orchestrator | Wednesday 11 June 2025 14:34:15 +0000 (0:00:00.704) 0:06:29.076 ******** 2025-06-11 14:34:33.085669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for 
testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:34:33.085682 | orchestrator | 2025-06-11 14:34:33.085693 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-11 14:34:33.085704 | orchestrator | Wednesday 11 June 2025 14:34:16 +0000 (0:00:00.898) 0:06:29.975 ******** 2025-06-11 14:34:33.085714 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.085725 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:33.085736 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:33.085746 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:33.085757 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:33.085767 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:33.085777 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:33.085788 | orchestrator | 2025-06-11 14:34:33.085798 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-11 14:34:33.085809 | orchestrator | Wednesday 11 June 2025 14:34:16 +0000 (0:00:00.796) 0:06:30.772 ******** 2025-06-11 14:34:33.085820 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-11 14:34:33.085831 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-11 14:34:33.085860 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-11 14:34:33.085872 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-11 14:34:33.085883 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-11 14:34:33.085908 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-11 14:34:33.085920 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-11 14:34:33.085930 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-11 14:34:33.085941 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-11 14:34:33.085951 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-11 14:34:33.085962 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-11 14:34:33.085972 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-11 14:34:33.085983 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-11 14:34:33.085994 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-11 14:34:33.086004 | orchestrator | 2025-06-11 14:34:33.086015 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-11 14:34:33.086087 | orchestrator | Wednesday 11 June 2025 14:34:19 +0000 (0:00:02.713) 0:06:33.485 ******** 2025-06-11 14:34:33.086098 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:33.086109 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:33.086119 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:33.086130 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:33.086140 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:33.086151 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:33.086161 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:33.086172 | orchestrator | 2025-06-11 14:34:33.086182 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-11 14:34:33.086193 | orchestrator | Wednesday 11 June 2025 14:34:20 +0000 (0:00:00.539) 0:06:34.025 ******** 
2025-06-11 14:34:33.086206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:34:33.086219 | orchestrator | 2025-06-11 14:34:33.086230 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-11 14:34:33.086241 | orchestrator | Wednesday 11 June 2025 14:34:20 +0000 (0:00:00.768) 0:06:34.793 ******** 2025-06-11 14:34:33.086251 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.086262 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:33.086272 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:33.086283 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:33.086294 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:33.086304 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:33.086315 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:33.086325 | orchestrator | 2025-06-11 14:34:33.086336 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-11 14:34:33.086347 | orchestrator | Wednesday 11 June 2025 14:34:21 +0000 (0:00:00.996) 0:06:35.790 ******** 2025-06-11 14:34:33.086357 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.086368 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:33.086379 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:33.086389 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:33.086400 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:33.086410 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:33.086421 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:33.086431 | orchestrator | 2025-06-11 14:34:33.086449 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-11 14:34:33.086460 | orchestrator | Wednesday 11 June 2025 14:34:22 +0000 (0:00:00.781) 0:06:36.572 ******** 2025-06-11 14:34:33.086471 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:33.086482 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:33.086492 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:33.086503 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:33.086514 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:33.086524 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:33.086543 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:33.086610 | orchestrator | 2025-06-11 14:34:33.086624 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-11 14:34:33.086635 | orchestrator | Wednesday 11 June 2025 14:34:23 +0000 (0:00:00.498) 0:06:37.071 ******** 2025-06-11 14:34:33.086646 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.086656 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:34:33.086667 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:34:33.086678 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:34:33.086688 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:34:33.086699 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:34:33.086709 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:34:33.086720 | orchestrator | 2025-06-11 14:34:33.086730 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-11 14:34:33.086741 | orchestrator | Wednesday 11 June 2025 14:34:24 +0000 
(0:00:01.523) 0:06:38.594 ******** 2025-06-11 14:34:33.086752 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:34:33.086763 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:34:33.086773 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:34:33.086784 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:34:33.086794 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:34:33.086805 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:34:33.086815 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:34:33.086826 | orchestrator | 2025-06-11 14:34:33.086836 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-11 14:34:33.086847 | orchestrator | Wednesday 11 June 2025 14:34:25 +0000 (0:00:00.526) 0:06:39.121 ******** 2025-06-11 14:34:33.086898 | orchestrator | ok: [testbed-manager] 2025-06-11 14:34:33.086909 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:34:33.086920 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:34:33.086931 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:34:33.086941 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:34:33.086952 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:34:33.086963 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:34:33.086974 | orchestrator | 2025-06-11 14:34:33.086994 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-11 14:35:04.239806 | orchestrator | Wednesday 11 June 2025 14:34:33 +0000 (0:00:07.891) 0:06:47.013 ******** 2025-06-11 14:35:04.239949 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.239976 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:04.239996 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:04.240014 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:04.240032 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:04.240051 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:04.240071 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:04.240091 | orchestrator | 2025-06-11 14:35:04.240114 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-11 14:35:04.240134 | orchestrator | Wednesday 11 June 2025 14:34:34 +0000 (0:00:01.358) 0:06:48.371 ******** 2025-06-11 14:35:04.240154 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.240174 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:04.240194 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:04.240213 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:04.240233 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:04.240254 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:04.240274 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:04.240292 | orchestrator | 2025-06-11 14:35:04.240315 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-11 14:35:04.240338 | orchestrator | Wednesday 11 June 2025 14:34:36 +0000 (0:00:01.694) 0:06:50.066 ******** 2025-06-11 14:35:04.240360 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.240383 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:04.240404 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:04.240426 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:04.240484 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:04.240508 | orchestrator | changed: [testbed-node-1] 
2025-06-11 14:35:04.240527 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:04.240578 | orchestrator | 2025-06-11 14:35:04.240598 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-11 14:35:04.240617 | orchestrator | Wednesday 11 June 2025 14:34:37 +0000 (0:00:01.607) 0:06:51.673 ******** 2025-06-11 14:35:04.240635 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.240653 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.240671 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.240688 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.240706 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.240723 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.240741 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.240758 | orchestrator | 2025-06-11 14:35:04.240777 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-11 14:35:04.240794 | orchestrator | Wednesday 11 June 2025 14:34:38 +0000 (0:00:01.092) 0:06:52.766 ******** 2025-06-11 14:35:04.240812 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:35:04.240830 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:35:04.240848 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:35:04.240866 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:35:04.240885 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:35:04.240905 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:35:04.240925 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:35:04.240944 | orchestrator | 2025-06-11 14:35:04.240961 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-11 14:35:04.240979 | orchestrator | Wednesday 11 June 2025 14:34:39 +0000 (0:00:00.849) 0:06:53.615 ******** 2025-06-11 14:35:04.240999 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:35:04.241017 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:35:04.241036 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:35:04.241055 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:35:04.241074 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:35:04.241105 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:35:04.241116 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:35:04.241127 | orchestrator | 2025-06-11 14:35:04.241139 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-11 14:35:04.241149 | orchestrator | Wednesday 11 June 2025 14:34:40 +0000 (0:00:00.519) 0:06:54.135 ******** 2025-06-11 14:35:04.241161 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.241172 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.241183 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241194 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241205 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241215 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.241226 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.241237 | orchestrator | 2025-06-11 14:35:04.241248 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-11 14:35:04.241259 | orchestrator | Wednesday 11 June 2025 14:34:40 +0000 (0:00:00.718) 0:06:54.853 ******** 2025-06-11 14:35:04.241270 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.241280 | orchestrator | ok: [testbed-node-3] 2025-06-11 
14:35:04.241291 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241302 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241312 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241323 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.241333 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.241344 | orchestrator | 2025-06-11 14:35:04.241355 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-11 14:35:04.241366 | orchestrator | Wednesday 11 June 2025 14:34:41 +0000 (0:00:00.534) 0:06:55.387 ******** 2025-06-11 14:35:04.241377 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.241388 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.241398 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241422 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241433 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241443 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.241454 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.241465 | orchestrator | 2025-06-11 14:35:04.241476 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-11 14:35:04.241487 | orchestrator | Wednesday 11 June 2025 14:34:41 +0000 (0:00:00.555) 0:06:55.942 ******** 2025-06-11 14:35:04.241497 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.241508 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.241519 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241530 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241540 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241599 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.241609 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.241620 | orchestrator | 2025-06-11 14:35:04.241631 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-11 14:35:04.241667 | orchestrator | Wednesday 11 June 2025 14:34:47 +0000 (0:00:05.461) 0:07:01.404 ******** 2025-06-11 14:35:04.241679 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:35:04.241690 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:35:04.241701 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:35:04.241712 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:35:04.241723 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:35:04.241733 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:35:04.241744 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:35:04.241755 | orchestrator | 2025-06-11 14:35:04.241765 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-11 14:35:04.241776 | orchestrator | Wednesday 11 June 2025 14:34:47 +0000 (0:00:00.492) 0:07:01.897 ******** 2025-06-11 14:35:04.241790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:35:04.241804 | orchestrator | 2025-06-11 14:35:04.241816 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-11 14:35:04.241827 | orchestrator | Wednesday 11 June 2025 14:34:48 +0000 (0:00:00.972) 0:07:02.869 ******** 2025-06-11 14:35:04.241837 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.241848 | orchestrator | ok: 
[testbed-manager] 2025-06-11 14:35:04.241859 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241869 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241880 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241891 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.241901 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.241912 | orchestrator | 2025-06-11 14:35:04.241923 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-11 14:35:04.241934 | orchestrator | Wednesday 11 June 2025 14:34:50 +0000 (0:00:01.832) 0:07:04.702 ******** 2025-06-11 14:35:04.241945 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.241955 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.241966 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.241977 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.241987 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.241998 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.242008 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.242090 | orchestrator | 2025-06-11 14:35:04.242106 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-11 14:35:04.242117 | orchestrator | Wednesday 11 June 2025 14:34:51 +0000 (0:00:01.220) 0:07:05.923 ******** 2025-06-11 14:35:04.242128 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:04.242138 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:04.242149 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:04.242160 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:04.242171 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:04.242190 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:04.242201 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:04.242212 | orchestrator | 2025-06-11 14:35:04.242223 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-11 14:35:04.242234 | orchestrator | Wednesday 11 June 2025 14:34:53 +0000 (0:00:01.120) 0:07:07.043 ******** 2025-06-11 14:35:04.242245 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242258 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242270 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242281 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242292 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242303 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242314 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-11 14:35:04.242325 | orchestrator | 2025-06-11 14:35:04.242336 | orchestrator | TASK [osism.services.lldpd : Include distribution specific 
install tasks] ****** 2025-06-11 14:35:04.242347 | orchestrator | Wednesday 11 June 2025 14:34:54 +0000 (0:00:01.725) 0:07:08.769 ******** 2025-06-11 14:35:04.242358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:35:04.242442 | orchestrator | 2025-06-11 14:35:04.242454 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-11 14:35:04.242465 | orchestrator | Wednesday 11 June 2025 14:34:55 +0000 (0:00:00.796) 0:07:09.565 ******** 2025-06-11 14:35:04.242476 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:04.242487 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:04.242497 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:04.242508 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:04.242519 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:04.242632 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:04.242649 | orchestrator | changed: [testbed-manager] 2025-06-11 14:35:04.242660 | orchestrator | 2025-06-11 14:35:04.242671 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-11 14:35:04.242694 | orchestrator | Wednesday 11 June 2025 14:35:04 +0000 (0:00:08.616) 0:07:18.181 ******** 2025-06-11 14:35:20.506054 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:20.506153 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:20.506161 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:20.506167 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:20.506172 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:20.506215 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:20.506221 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:20.506227 | orchestrator | 2025-06-11 14:35:20.506234 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-11 14:35:20.506241 | orchestrator | Wednesday 11 June 2025 14:35:05 +0000 (0:00:01.733) 0:07:19.915 ******** 2025-06-11 14:35:20.506246 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:20.506251 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:20.506256 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:20.506261 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:20.506266 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:20.506288 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:20.506294 | orchestrator | 2025-06-11 14:35:20.506300 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-11 14:35:20.506305 | orchestrator | Wednesday 11 June 2025 14:35:07 +0000 (0:00:01.265) 0:07:21.180 ******** 2025-06-11 14:35:20.506310 | orchestrator | changed: [testbed-manager] 2025-06-11 14:35:20.506316 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:20.506322 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:20.506327 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:20.506332 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:20.506337 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:20.506342 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:20.506347 | orchestrator | 2025-06-11 14:35:20.506352 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-06-11 14:35:20.506357 | orchestrator | 2025-06-11 14:35:20.506362 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-11 14:35:20.506368 | orchestrator | Wednesday 11 June 2025 14:35:08 +0000 (0:00:01.508) 0:07:22.689 ******** 2025-06-11 14:35:20.506373 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:35:20.506378 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:35:20.506383 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:35:20.506388 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:35:20.506393 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:35:20.506398 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:35:20.506403 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:35:20.506408 | orchestrator | 2025-06-11 14:35:20.506413 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-11 14:35:20.506418 | orchestrator | 2025-06-11 14:35:20.506423 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-11 14:35:20.506428 | orchestrator | Wednesday 11 June 2025 14:35:09 +0000 (0:00:00.524) 0:07:23.213 ******** 2025-06-11 14:35:20.506433 | orchestrator | changed: [testbed-manager] 2025-06-11 14:35:20.506438 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:20.506443 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:20.506448 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:20.506453 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:20.506458 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:20.506463 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:20.506468 | orchestrator | 2025-06-11 14:35:20.506473 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-11 14:35:20.506478 | orchestrator | Wednesday 11 June 2025 14:35:10 +0000 (0:00:01.332) 0:07:24.545 ******** 2025-06-11 14:35:20.506484 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:20.506489 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:20.506494 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:20.506499 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:20.506504 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:20.506520 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:20.506526 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:20.506531 | orchestrator | 2025-06-11 14:35:20.506569 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-11 14:35:20.506576 | orchestrator | Wednesday 11 June 2025 14:35:12 +0000 (0:00:01.440) 0:07:25.986 ******** 2025-06-11 14:35:20.506582 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:35:20.506588 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:35:20.506593 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:35:20.506600 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:35:20.506605 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:35:20.506619 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:35:20.506624 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:35:20.506630 | orchestrator | 2025-06-11 14:35:20.506636 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-11 14:35:20.506648 | orchestrator | Wednesday 11 June 2025 14:35:13 +0000 
(0:00:01.045) 0:07:27.031 ******** 2025-06-11 14:35:20.506659 | orchestrator | changed: [testbed-manager] 2025-06-11 14:35:20.506664 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:20.506672 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:20.506681 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:20.506689 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:20.506698 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:20.506707 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:20.506716 | orchestrator | 2025-06-11 14:35:20.506724 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-11 14:35:20.506734 | orchestrator | 2025-06-11 14:35:20.506739 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-11 14:35:20.506745 | orchestrator | Wednesday 11 June 2025 14:35:14 +0000 (0:00:01.234) 0:07:28.266 ******** 2025-06-11 14:35:20.506751 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:35:20.506758 | orchestrator | 2025-06-11 14:35:20.506764 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-11 14:35:20.506769 | orchestrator | Wednesday 11 June 2025 14:35:15 +0000 (0:00:00.991) 0:07:29.257 ******** 2025-06-11 14:35:20.506775 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:20.506781 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:20.506786 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:20.506792 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:20.506798 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:20.506804 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:20.506809 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:20.506815 | orchestrator | 2025-06-11 14:35:20.506834 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-11 14:35:20.506840 | orchestrator | Wednesday 11 June 2025 14:35:16 +0000 (0:00:00.837) 0:07:30.095 ******** 2025-06-11 14:35:20.506846 | orchestrator | changed: [testbed-manager] 2025-06-11 14:35:20.506851 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:35:20.506857 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:35:20.506863 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:35:20.506868 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:35:20.506874 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:35:20.506879 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:35:20.506885 | orchestrator | 2025-06-11 14:35:20.506890 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-11 14:35:20.506896 | orchestrator | Wednesday 11 June 2025 14:35:17 +0000 (0:00:01.152) 0:07:31.247 ******** 2025-06-11 14:35:20.506902 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:35:20.506908 | orchestrator | 2025-06-11 14:35:20.506914 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-11 14:35:20.506919 | orchestrator | Wednesday 11 June 2025 14:35:18 +0000 (0:00:01.119) 0:07:32.367 ******** 2025-06-11 14:35:20.506925 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:20.506931 | 
orchestrator | ok: [testbed-node-3]
2025-06-11 14:35:20.506937 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:35:20.506942 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:35:20.506947 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:35:20.506952 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:35:20.506957 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:35:20.506962 | orchestrator |
2025-06-11 14:35:20.506967 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-11 14:35:20.506972 | orchestrator | Wednesday 11 June 2025 14:35:19 +0000 (0:00:00.886) 0:07:33.253 ********
2025-06-11 14:35:20.506977 | orchestrator | changed: [testbed-manager]
2025-06-11 14:35:20.506982 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:35:20.506987 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:35:20.506992 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:35:20.507002 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:35:20.507007 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:35:20.507012 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:35:20.507017 | orchestrator |
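The osism.commons.state tasks persist deployment state as Ansible local facts: files dropped under /etc/ansible/facts.d appear as ansible_local on the next fact gathering, which is how osism.bootstrap.status and osism.bootstrap.timestamp can be read back by later runs. A sketch of the mechanism; the file name and fact layout are assumed, not taken from the role:

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Write state into file
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/osism.fact      # assumed path; exposed as ansible_local.osism
    content: '{"bootstrap": {"status": "True"}}'
    mode: "0644"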
2025-06-11 14:35:20.507022 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:35:20.507028 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0  failed=0  skipped=41  rescued=0  ignored=0
2025-06-11 14:35:20.507034 | orchestrator | testbed-node-0  : ok=170  changed=66  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
2025-06-11 14:35:20.507040 | orchestrator | testbed-node-1  : ok=170  changed=66  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
2025-06-11 14:35:20.507045 | orchestrator | testbed-node-2  : ok=170  changed=66  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
2025-06-11 14:35:20.507050 | orchestrator | testbed-node-3  : ok=169  changed=63  unreachable=0  failed=0  skipped=37  rescued=0  ignored=0
2025-06-11 14:35:20.507055 | orchestrator | testbed-node-4  : ok=169  changed=63  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
2025-06-11 14:35:20.507061 | orchestrator | testbed-node-5  : ok=169  changed=63  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
2025-06-11 14:35:20.507070 | orchestrator |
2025-06-11 14:35:20.507079 | orchestrator |
2025-06-11 14:35:20.507086 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:35:20.507094 | orchestrator | Wednesday 11 June 2025 14:35:20 +0000 (0:00:01.186) 0:07:34.440 ********
2025-06-11 14:35:20.507102 | orchestrator | ===============================================================================
2025-06-11 14:35:20.507110 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.27s
2025-06-11 14:35:20.507117 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.56s
2025-06-11 14:35:20.507125 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.47s
2025-06-11 14:35:20.507133 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.77s
2025-06-11 14:35:20.507140 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.68s
2025-06-11 14:35:20.507149 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.23s
2025-06-11 14:35:20.507158 | orchestrator | osism.services.docker : Install docker package -------------------------- 9.91s
2025-06-11 14:35:20.507166 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.88s
2025-06-11 14:35:20.507173 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.62s
2025-06-11 14:35:20.507180 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.46s
2025-06-11 14:35:20.507187 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.89s
2025-06-11 14:35:20.507194 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.80s
2025-06-11 14:35:20.507201 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.35s
2025-06-11 14:35:20.507208 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.16s
2025-06-11 14:35:20.507221 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.86s
2025-06-11 14:35:21.189689 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.75s
2025-06-11 14:35:21.189798 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.67s
2025-06-11 14:35:21.189812 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.52s
2025-06-11 14:35:21.189824 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.51s
2025-06-11 14:35:21.189862 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.46s
2025-06-11 14:35:21.505234 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-11 14:35:21.505336 | orchestrator | + osism apply network
2025-06-11 14:35:23.565961 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:35:23.566130 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:35:23.566158 | orchestrator | Registering Redlock._release_script
2025-06-11 14:35:23.623149 | orchestrator | 2025-06-11 14:35:23 | INFO  | Task 98435e2d-fbc3-4ac3-8dce-4dbcdf7780f8 (network) was prepared for execution.
2025-06-11 14:35:23.623230 | orchestrator | 2025-06-11 14:35:23 | INFO  | It takes a moment until task 98435e2d-fbc3-4ac3-8dce-4dbcdf7780f8 (network) has been started and output is visible here.
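With the bootstrap play finished, the job switches from direct playbook runs to the OSISM manager: osism apply network queues a task (the Redlock messages come from its Redis-based locking) and then streams the resulting Ansible run back into this console. Conceptually the manager dispatches a play like the following sketch; the real play and target group ship with OSISM and may differ:

- name: Apply role network
  hosts: all                  # the actual play targets OSISM-managed inventory groups
  roles:
    - osism.commons.network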
2025-06-11 14:35:50.494131 | orchestrator | 2025-06-11 14:35:50.494257 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-11 14:35:50.494288 | orchestrator | 2025-06-11 14:35:50.494308 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-11 14:35:50.494328 | orchestrator | Wednesday 11 June 2025 14:35:27 +0000 (0:00:00.243) 0:00:00.243 ******** 2025-06-11 14:35:50.494341 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:50.494354 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:50.494365 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:50.494376 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:50.494387 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:50.494398 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:50.494409 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:50.494420 | orchestrator | 2025-06-11 14:35:50.494431 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-11 14:35:50.494442 | orchestrator | Wednesday 11 June 2025 14:35:28 +0000 (0:00:00.617) 0:00:00.860 ******** 2025-06-11 14:35:50.494455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:35:50.494469 | orchestrator | 2025-06-11 14:35:50.494481 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-11 14:35:50.494492 | orchestrator | Wednesday 11 June 2025 14:35:29 +0000 (0:00:01.089) 0:00:01.950 ******** 2025-06-11 14:35:50.494502 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:50.494513 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:50.494524 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:50.494535 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:50.494577 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:50.494590 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:50.494600 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:50.494611 | orchestrator | 2025-06-11 14:35:50.494622 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-11 14:35:50.494651 | orchestrator | Wednesday 11 June 2025 14:35:31 +0000 (0:00:01.893) 0:00:03.843 ******** 2025-06-11 14:35:50.494662 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:50.494673 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:50.494684 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:50.494695 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:50.494705 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:50.494716 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:50.494726 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:50.494737 | orchestrator | 2025-06-11 14:35:50.494748 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-11 14:35:50.494759 | orchestrator | Wednesday 11 June 2025 14:35:32 +0000 (0:00:01.683) 0:00:05.527 ******** 2025-06-11 14:35:50.494770 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-11 14:35:50.494782 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-11 14:35:50.494792 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-11 14:35:50.494803 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-11 14:35:50.494840 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-11 14:35:50.494852 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-11 14:35:50.494863 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-11 14:35:50.494873 | orchestrator |
2025-06-11 14:35:50.494884 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-11 14:35:50.494895 | orchestrator | Wednesday 11 June 2025 14:35:33 +0000 (0:00:00.975) 0:00:06.503 ********
2025-06-11 14:35:50.494906 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-11 14:35:50.494918 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-11 14:35:50.494929 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-11 14:35:50.494939 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-11 14:35:50.494950 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 14:35:50.494961 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-11 14:35:50.494971 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-11 14:35:50.494982 | orchestrator |
2025-06-11 14:35:50.494993 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-11 14:35:50.495004 | orchestrator | Wednesday 11 June 2025 14:35:36 +0000 (0:00:03.304) 0:00:09.807 ********
2025-06-11 14:35:50.495015 | orchestrator | changed: [testbed-manager]
2025-06-11 14:35:50.495025 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:35:50.495036 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:35:50.495046 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:35:50.495057 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:35:50.495067 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:35:50.495078 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:35:50.495089 | orchestrator |
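The netplan handling above is a three-step pattern: render the configuration on the deploy side ("Prepare netplan configuration template", delegated to localhost), ship the result to /etc/netplan ("Copy netplan configuration"), and later delete everything the role does not own (the cleanup further below removes 50-cloud-init.yaml but keeps 01-osism.yaml). A sketch of the copy step; the variable and handler names are assumed:

- name: Copy netplan configuration
  ansible.builtin.copy:
    src: "{{ network_netplan_rendered }}"   # file rendered on localhost in the previous step (assumed name)
    dest: /etc/netplan/01-osism.yaml        # the file the cleanup step below leaves in place
    mode: "0600"
  notify: Apply netplan configuration       # assumed handler running 'netplan apply'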
2025-06-11 14:35:50.495099 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-11 14:35:50.495110 | orchestrator | Wednesday 11 June 2025 14:35:38 +0000 (0:00:01.344) 0:00:11.152 ********
2025-06-11 14:35:50.495121 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-11 14:35:50.495132 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-11 14:35:50.495142 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 14:35:50.495153 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-11 14:35:50.495163 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-11 14:35:50.495174 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-11 14:35:50.495184 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-11 14:35:50.495195 | orchestrator |
2025-06-11 14:35:50.495205 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-11 14:35:50.495216 | orchestrator | Wednesday 11 June 2025 14:35:39 +0000 (0:00:01.595) 0:00:12.747 ********
2025-06-11 14:35:50.495235 | orchestrator | ok: [testbed-manager]
2025-06-11 14:35:50.495249 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:35:50.495259 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:35:50.495270 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:35:50.495281 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:35:50.495292 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:35:50.495302 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:35:50.495313 | orchestrator |
2025-06-11 14:35:50.495324 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-11 14:35:50.495355 | orchestrator | Wednesday 11 June 2025 14:35:40 +0000 (0:00:01.001) 0:00:13.748 ********
2025-06-11 14:35:50.495367 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:35:50.495378 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:35:50.495389 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:35:50.495400 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:35:50.495410 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:35:50.495421 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:35:50.495432 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:35:50.495442 | orchestrator |
2025-06-11 14:35:50.495453 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-11 14:35:50.495464 | orchestrator | Wednesday 11 June 2025 14:35:41 +0000 (0:00:00.642) 0:00:14.390 ********
2025-06-11 14:35:50.495484 | orchestrator | ok: [testbed-manager]
2025-06-11 14:35:50.495495 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:35:50.495505 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:35:50.495516 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:35:50.495526 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:35:50.495537 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:35:50.495572 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:35:50.495584 | orchestrator |
2025-06-11 14:35:50.495594 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-11 14:35:50.495605 | orchestrator | Wednesday 11 June 2025 14:35:43 +0000 (0:00:01.949) 0:00:16.340 ********
2025-06-11 14:35:50.495616 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:35:50.495626 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:35:50.495637 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:35:50.495648 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:35:50.495658 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:35:50.495669 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:35:50.495681 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-11 14:35:50.495693 | orchestrator |
2025-06-11 14:35:50.495704 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-11 14:35:50.495715 | orchestrator | Wednesday 11 June 2025 14:35:44 +0000 (0:00:00.929) 0:00:17.269 ********
2025-06-11 14:35:50.495731 | orchestrator | ok: [testbed-manager]
2025-06-11 14:35:50.495742 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:35:50.495753 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:35:50.495763 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:35:50.495774 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:35:50.495784 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:35:50.495795 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:35:50.495805 | orchestrator |
2025-06-11 14:35:50.495816 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-11 14:35:50.495827 | orchestrator | Wednesday 11 June 2025 14:35:46 +0000 (0:00:01.688) 0:00:18.957 ********
2025-06-11 14:35:50.495838 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:35:50.495850 | orchestrator | 2025-06-11 14:35:50.495861 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-11 14:35:50.495872 | orchestrator | Wednesday 11 June 2025 14:35:47 +0000 (0:00:01.292) 0:00:20.250 ******** 2025-06-11 14:35:50.495883 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:50.495894 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:50.495904 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:50.495915 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:50.495925 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:50.495936 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:50.495946 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:50.495957 | orchestrator | 2025-06-11 14:35:50.495968 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-11 14:35:50.495979 | orchestrator | Wednesday 11 June 2025 14:35:48 +0000 (0:00:00.986) 0:00:21.236 ******** 2025-06-11 14:35:50.495990 | orchestrator | ok: [testbed-manager] 2025-06-11 14:35:50.496000 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:35:50.496011 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:35:50.496021 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:35:50.496032 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:35:50.496042 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:35:50.496052 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:35:50.496063 | orchestrator | 2025-06-11 14:35:50.496073 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-11 14:35:50.496084 | orchestrator | Wednesday 11 June 2025 14:35:49 +0000 (0:00:00.856) 0:00:22.093 ******** 2025-06-11 14:35:50.496101 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496112 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496123 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496133 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496143 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496154 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496164 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496175 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496186 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496196 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496207 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496217 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 14:35:50.496228 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-11 14:35:50.496238 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-11 
2025-06-11 14:35:50.496249 | orchestrator |
2025-06-11 14:35:50.496267 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-11 14:36:06.151712 | orchestrator | Wednesday 11 June 2025 14:35:50 +0000 (0:00:01.187) 0:00:23.281 ********
2025-06-11 14:36:06.151829 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:36:06.151847 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:06.151859 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:06.151871 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:06.151882 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:06.151893 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:06.151904 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:06.151915 | orchestrator |
2025-06-11 14:36:06.151927 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-11 14:36:06.151939 | orchestrator | Wednesday 11 June 2025 14:35:51 +0000 (0:00:00.659) 0:00:23.940 ********
2025-06-11 14:36:06.151952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2025-06-11 14:36:06.151966 | orchestrator |
2025-06-11 14:36:06.151978 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-11 14:36:06.151989 | orchestrator | Wednesday 11 June 2025 14:35:55 +0000 (0:00:04.386) 0:00:28.326 ********
2025-06-11 14:36:06.152002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152042 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152138 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152232 | orchestrator |
2025-06-11 14:36:06.152244 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-06-11 14:36:06.152256 | orchestrator | Wednesday 11 June 2025 14:36:00 +0000 (0:00:05.237) 0:00:33.564 ********
2025-06-11 14:36:06.152269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152282 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152360 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-11 14:36:06.152404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:06.152437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:12.111710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-11 14:36:12.111793 | orchestrator |
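Each item above becomes a .netdev/.network pair under /etc/systemd/network (the 30- prefix is visible in the cleanup task below). A plausible sketch of the pair for testbed-manager's vxlan1, derived from the logged parameters; the exact keys are an assumption about the role's templates, and with several unicast VTEPs the flooding is typically wired up with one all-zero [BridgeFDB] entry per remote destination:

  # /etc/systemd/network/30-vxlan1.netdev
  [NetDev]
  Name=vxlan1
  Kind=vxlan
  MTUBytes=1350

  [VXLAN]
  VNI=23
  Local=192.168.16.5

  # /etc/systemd/network/30-vxlan1.network
  [Match]
  Name=vxlan1

  [Network]
  Address=192.168.128.5/20

  # repeated once per remote VTEP (192.168.16.10 ... 192.168.16.15)
  [BridgeFDB]
  MACAddress=00:00:00:00:00:00
  Destination=192.168.16.10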
2025-06-11 14:36:12.111808 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-06-11 14:36:12.111821 | orchestrator | Wednesday 11 June 2025 14:36:06 +0000 (0:00:05.374) 0:00:38.938 ********
2025-06-11 14:36:12.111834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:36:12.111846 | orchestrator |
2025-06-11 14:36:12.111857 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-11 14:36:12.111887 | orchestrator | Wednesday 11 June 2025 14:36:07 +0000 (0:00:01.272) 0:00:40.211 ********
2025-06-11 14:36:12.111899 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:12.111910 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:36:12.111921 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:36:12.111932 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:36:12.111943 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:36:12.111953 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:36:12.111964 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:36:12.111974 | orchestrator |
2025-06-11 14:36:12.111985 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-11 14:36:12.112008 | orchestrator | Wednesday 11 June 2025 14:36:08 +0000 (0:00:01.174) 0:00:41.386 ********
2025-06-11 14:36:12.112019 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112031 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112042 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112052 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112063 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112074 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112084 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112095 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:36:12.112107 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112118 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112129 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112139 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112150 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112161 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:12.112172 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112182 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112193 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112204 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112215 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:12.112225 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112236 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112247 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112258 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112269 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:12.112281 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112294 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112306 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112319 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112331 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:12.112342 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:12.112355 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-11 14:36:12.112374 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-11 14:36:12.112386 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-11 14:36:12.112398 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-11 14:36:12.112409 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:12.112421 | orchestrator |
2025-06-11 14:36:12.112433 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-11 14:36:12.112460 | orchestrator | Wednesday 11 June 2025 14:36:10 +0000 (0:00:02.129) 0:00:43.516 ********
2025-06-11 14:36:12.112473 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:36:12.112485 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:12.112497 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:12.112509 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:12.112521 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:12.112534 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:12.112546 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:12.112558 | orchestrator |
2025-06-11 14:36:12.112592 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-11 14:36:12.112614 | orchestrator | Wednesday 11 June 2025 14:36:11 +0000 (0:00:00.611) 0:00:44.127 ********
2025-06-11 14:36:12.112635 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:36:12.112654 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:12.112665 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:12.112676 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:12.112687 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:12.112697 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:12.112708 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:12.112718 | orchestrator |
2025-06-11 14:36:12.112729 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:36:12.112741 | orchestrator | testbed-manager : ok=21 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-11 14:36:12.112752 | orchestrator | testbed-node-0 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112770 | orchestrator | testbed-node-1 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112781 | orchestrator | testbed-node-2 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112792 | orchestrator | testbed-node-3 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112802 | orchestrator | testbed-node-4 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112813 | orchestrator | testbed-node-5 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-06-11 14:36:12.112824 | orchestrator |
2025-06-11 14:36:12.112835 | orchestrator |
2025-06-11 14:36:12.112846 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:36:12.112857 | orchestrator | Wednesday 11 June 2025 14:36:11 +0000 (0:00:00.550) 0:00:44.677 ********
2025-06-11 14:36:12.112868 | orchestrator | ===============================================================================
2025-06-11 14:36:12.112878 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.37s
2025-06-11 14:36:12.112889 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.24s
2025-06-11 14:36:12.112900 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.39s
2025-06-11 14:36:12.112919 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.30s
2025-06-11 14:36:12.112930 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.13s
2025-06-11 14:36:12.112940 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.95s
2025-06-11 14:36:12.112951 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s
2025-06-11 14:36:12.112961 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s
2025-06-11 14:36:12.112972 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s
2025-06-11 14:36:12.112983 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.60s
2025-06-11 14:36:12.112994 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.34s
2025-06-11 14:36:12.113004 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2025-06-11 14:36:12.113015 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2025-06-11 14:36:12.113026 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2025-06-11 14:36:12.113037 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s
2025-06-11 14:36:12.113047 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.09s
2025-06-11 14:36:12.113058 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.00s
2025-06-11 14:36:12.113069 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2025-06-11 14:36:12.113079 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2025-06-11 14:36:12.113090 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s
2025-06-11 14:36:12.269873 | orchestrator | + osism apply wireguard
2025-06-11 14:36:13.703941 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:36:13.704029 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:36:13.704045 | orchestrator | Registering Redlock._release_script
2025-06-11 14:36:13.754340 | orchestrator | 2025-06-11 14:36:13 | INFO | Task 45e3f955-4701-4159-96cb-524aa73f708a (wireguard) was prepared for execution.
2025-06-11 14:36:13.754418 | orchestrator | 2025-06-11 14:36:13 | INFO | It takes a moment until task 45e3f955-4701-4159-96cb-524aa73f708a (wireguard) has been started and output is visible here.
2025-06-11 14:36:32.627759 | orchestrator |
2025-06-11 14:36:32.627858 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-11 14:36:32.627875 | orchestrator |
2025-06-11 14:36:32.627887 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-11 14:36:32.627898 | orchestrator | Wednesday 11 June 2025 14:36:17 +0000 (0:00:00.241) 0:00:00.241 ********
2025-06-11 14:36:32.627909 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:32.627922 | orchestrator |
2025-06-11 14:36:32.627932 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-11 14:36:32.627944 | orchestrator | Wednesday 11 June 2025 14:36:18 +0000 (0:00:01.479) 0:00:01.720 ********
2025-06-11 14:36:32.627954 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.627966 | orchestrator |
2025-06-11 14:36:32.627977 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-11 14:36:32.627988 | orchestrator | Wednesday 11 June 2025 14:36:25 +0000 (0:00:06.376) 0:00:08.097 ********
2025-06-11 14:36:32.627999 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628011 | orchestrator |
2025-06-11 14:36:32.628022 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-11 14:36:32.628033 | orchestrator | Wednesday 11 June 2025 14:36:25 +0000 (0:00:00.574) 0:00:08.672 ********
2025-06-11 14:36:32.628044 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628055 | orchestrator |
2025-06-11 14:36:32.628066 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-11 14:36:32.628077 | orchestrator | Wednesday 11 June 2025 14:36:26 +0000 (0:00:00.426) 0:00:09.098 ********
2025-06-11 14:36:32.628111 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:32.628122 | orchestrator |
2025-06-11 14:36:32.628146 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-11 14:36:32.628158 | orchestrator | Wednesday 11 June 2025 14:36:26 +0000 (0:00:00.535) 0:00:09.634 ********
2025-06-11 14:36:32.628168 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:32.628179 | orchestrator |
2025-06-11 14:36:32.628190 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-11 14:36:32.628201 | orchestrator | Wednesday 11 June 2025 14:36:27 +0000 (0:00:00.564) 0:00:10.198 ********
2025-06-11 14:36:32.628212 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:32.628223 | orchestrator |
2025-06-11 14:36:32.628234 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-11 14:36:32.628245 | orchestrator | Wednesday 11 June 2025 14:36:27 +0000 (0:00:00.439) 0:00:10.638 ********
2025-06-11 14:36:32.628256 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628267 | orchestrator |
2025-06-11 14:36:32.628277 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-11 14:36:32.628289 | orchestrator | Wednesday 11 June 2025 14:36:29 +0000 (0:00:01.266) 0:00:11.904 ********
2025-06-11 14:36:32.628301 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-11 14:36:32.628313 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628325 | orchestrator |
2025-06-11 14:36:32.628337 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-11 14:36:32.628350 | orchestrator | Wednesday 11 June 2025 14:36:29 +0000 (0:00:00.918) 0:00:12.823 ********
2025-06-11 14:36:32.628361 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628373 | orchestrator |
2025-06-11 14:36:32.628385 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-11 14:36:32.628397 | orchestrator | Wednesday 11 June 2025 14:36:31 +0000 (0:00:01.541) 0:00:14.364 ********
2025-06-11 14:36:32.628409 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:32.628421 | orchestrator |
2025-06-11 14:36:32.628434 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:36:32.628446 | orchestrator | testbed-manager : ok=11 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:36:32.628459 | orchestrator |
2025-06-11 14:36:32.628471 | orchestrator |
2025-06-11 14:36:32.628483 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:36:32.628495 | orchestrator | Wednesday 11 June 2025 14:36:32 +0000 (0:00:00.838) 0:00:15.203 ********
2025-06-11 14:36:32.628507 | orchestrator | ===============================================================================
2025-06-11 14:36:32.628519 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.38s
2025-06-11 14:36:32.628531 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.54s
2025-06-11 14:36:32.628543 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.48s
2025-06-11 14:36:32.628555 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s
2025-06-11 14:36:32.628566 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2025-06-11 14:36:32.628578 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s
2025-06-11 14:36:32.628590 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-06-11 14:36:32.628628 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s
2025-06-11 14:36:32.628640 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2025-06-11 14:36:32.628652 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-06-11 14:36:32.628663 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
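The play above generates server keys plus a preshared key on testbed-manager, templates /etc/wireguard/wg0.conf, and manages wg-quick@wg0.service. A plausible sketch of the kind of server config that results; key material, port, and peer addressing are placeholders, and the actual template lives in the osism.services.wireguard role:

  # /etc/wireguard/wg0.conf (sketch; values are placeholders)
  [Interface]
  Address = <VPN address of testbed-manager>
  ListenPort = 51820
  PrivateKey = <server private key>

  [Peer]
  PublicKey = <client public key>
  PresharedKey = <preshared key>
  AllowedIPs = <client VPN address>/32

The "Manage wg-quick@wg0.service service" task then corresponds to something like systemctl enable --now wg-quick@wg0, and the restart handler fires because the configuration changed.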
2025-06-11 14:36:32.785020 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-11 14:36:32.815472 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-11 14:36:32.815566 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-11 14:36:32.902432 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 171 0 --:--:-- --:--:-- --:--:-- 172
2025-06-11 14:36:32.915230 | orchestrator | + osism apply --environment custom workarounds
2025-06-11 14:36:34.384575 | orchestrator | 2025-06-11 14:36:34 | INFO | Trying to run play workarounds in environment custom
2025-06-11 14:36:34.388494 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:36:34.388543 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:36:34.388557 | orchestrator | Registering Redlock._release_script
2025-06-11 14:36:34.439023 | orchestrator | 2025-06-11 14:36:34 | INFO | Task be881f6d-ba99-4c27-925f-7c54ca178c57 (workarounds) was prepared for execution.
2025-06-11 14:36:34.439104 | orchestrator | 2025-06-11 14:36:34 | INFO | It takes a moment until task be881f6d-ba99-4c27-925f-7c54ca178c57 (workarounds) has been started and output is visible here.
2025-06-11 14:36:58.986195 | orchestrator |
2025-06-11 14:36:58.986341 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:36:58.986361 | orchestrator |
2025-06-11 14:36:58.986374 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-11 14:36:58.986385 | orchestrator | Wednesday 11 June 2025 14:36:38 +0000 (0:00:00.160) 0:00:00.160 ********
2025-06-11 14:36:58.986397 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986408 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986419 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986446 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986457 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986468 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986478 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-11 14:36:58.986489 | orchestrator |
2025-06-11 14:36:58.986499 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-11 14:36:58.986510 | orchestrator |
2025-06-11 14:36:58.986521 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-11 14:36:58.986531 | orchestrator | Wednesday 11 June 2025 14:36:39 +0000 (0:00:00.822) 0:00:00.983 ********
2025-06-11 14:36:58.986542 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:58.986554 | orchestrator |
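The two "Apply netplan configuration" plays amount to running netplan apply on each host after the configuration files were laid down earlier. A minimal sketch of such a task; the actual play in /opt/configuration may differ in wording and options:

  - name: Apply netplan configuration
    ansible.builtin.command: netplan apply
    become: true
    changed_when: false  # assumption: the log reports 'ok', suggesting change reporting is suppressed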
2025-06-11 14:36:58.986566 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-11 14:36:58.986576 | orchestrator |
2025-06-11 14:36:58.986588 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-11 14:36:58.986599 | orchestrator | Wednesday 11 June 2025 14:36:41 +0000 (0:00:02.474) 0:00:03.457 ********
2025-06-11 14:36:58.986610 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:36:58.986683 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:36:58.986698 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:36:58.986710 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:36:58.986721 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:36:58.986733 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:36:58.986745 | orchestrator |
2025-06-11 14:36:58.986757 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-11 14:36:58.986770 | orchestrator |
2025-06-11 14:36:58.986782 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-11 14:36:58.986794 | orchestrator | Wednesday 11 June 2025 14:36:43 +0000 (0:00:01.923) 0:00:05.381 ********
2025-06-11 14:36:58.986807 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986820 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986857 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986869 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986882 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986894 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-11 14:36:58.986906 | orchestrator |
2025-06-11 14:36:58.986918 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-11 14:36:58.986930 | orchestrator | Wednesday 11 June 2025 14:36:44 +0000 (0:00:01.499) 0:00:06.881 ********
2025-06-11 14:36:58.986942 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:36:58.986954 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:36:58.986966 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:36:58.986978 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:36:58.986990 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:36:58.987002 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:36:58.987014 | orchestrator |
2025-06-11 14:36:58.987026 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-11 14:36:58.987038 | orchestrator | Wednesday 11 June 2025 14:36:48 +0000 (0:00:03.772) 0:00:10.653 ********
2025-06-11 14:36:58.987049 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:58.987059 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:58.987070 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:58.987080 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:58.987091 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:58.987101 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:58.987112 | orchestrator |
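Distributing a private CA on Debian-family hosts follows the pattern above: drop the certificate where the trust tooling picks it up and run update-ca-certificates (update-ca-trust is the RedHat-family equivalent, skipped here on Ubuntu nodes). A minimal sketch; the destination path is an assumption about the play, the source path is taken from the log:

  - name: Copy custom CA certificates
    ansible.builtin.copy:
      src: /opt/configuration/environments/kolla/certificates/ca/testbed.crt
      dest: /usr/local/share/ca-certificates/testbed.crt  # assumed destination
      mode: "0644"

  - name: Run update-ca-certificates
    ansible.builtin.command: update-ca-certificates
    when: ansible_os_family == "Debian"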
2025-06-11 14:36:58.987122 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-11 14:36:58.987133 | orchestrator |
2025-06-11 14:36:58.987144 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-11 14:36:58.987154 | orchestrator | Wednesday 11 June 2025 14:36:49 +0000 (0:00:00.676) 0:00:11.330 ********
2025-06-11 14:36:58.987165 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:58.987175 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:36:58.987195 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:36:58.987213 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:36:58.987230 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:36:58.987248 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:36:58.987267 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:36:58.987285 | orchestrator |
2025-06-11 14:36:58.987304 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-11 14:36:58.987320 | orchestrator | Wednesday 11 June 2025 14:36:50 +0000 (0:00:01.511) 0:00:12.842 ********
2025-06-11 14:36:58.987331 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:58.987342 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:36:58.987352 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:36:58.987363 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:36:58.987373 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:36:58.987384 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:36:58.987414 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:36:58.987425 | orchestrator |
2025-06-11 14:36:58.987436 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-11 14:36:58.987447 | orchestrator | Wednesday 11 June 2025 14:36:52 +0000 (0:00:01.594) 0:00:14.437 ********
2025-06-11 14:36:58.987457 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:36:58.987468 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:36:58.987478 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:36:58.987489 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:58.987499 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:36:58.987510 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:36:58.987530 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:36:58.987540 | orchestrator |
2025-06-11 14:36:58.987551 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-11 14:36:58.987568 | orchestrator | Wednesday 11 June 2025 14:36:54 +0000 (0:00:01.526) 0:00:15.963 ********
2025-06-11 14:36:58.987579 | orchestrator | changed: [testbed-manager]
2025-06-11 14:36:58.987590 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:36:58.987600 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:36:58.987611 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:36:58.987659 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:36:58.987672 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:36:58.987682 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:36:58.987693 | orchestrator |
2025-06-11 14:36:58.987704 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-11 14:36:58.987715 | orchestrator | Wednesday 11 June 2025 14:36:55 +0000 (0:00:01.742) 0:00:17.706 ********
2025-06-11 14:36:58.987726 | orchestrator | skipping: [testbed-manager]
2025-06-11 14:36:58.987744 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:36:58.987762 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:36:58.987780 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:36:58.987797 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:36:58.987812 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:36:58.987827 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:36:58.987844 | orchestrator |
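The workaround service boils down to a script plus a systemd unit that runs it, copied to every host and enabled. A plausible sketch of such a oneshot unit; the ExecStart path and the unit internals are assumptions, since the actual file ships with the testbed configuration:

  [Unit]
  Description=Run local testbed workarounds
  After=network-online.target

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/workarounds.sh  # assumed install path of the copied script
  RemainAfterExit=true

  [Install]
  WantedBy=multi-user.target

The "Reload systemd daemon" task then corresponds to systemctl daemon-reload so the freshly copied unit is picked up before it is enabled.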
2025-06-11 14:36:58.987862 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-11 14:36:58.987881 | orchestrator |
2025-06-11 14:36:58.987900 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-11 14:36:58.987918 | orchestrator | Wednesday 11 June 2025 14:36:56 +0000 (0:00:00.598) 0:00:18.304 ********
2025-06-11 14:36:58.987932 | orchestrator | ok: [testbed-manager]
2025-06-11 14:36:58.987943 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:36:58.987954 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:36:58.987964 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:36:58.987975 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:36:58.987985 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:36:58.987996 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:36:58.988006 | orchestrator |
2025-06-11 14:36:58.988017 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:36:58.988030 | orchestrator | testbed-manager : ok=7 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2025-06-11 14:36:58.988042 | orchestrator | testbed-node-0 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988053 | orchestrator | testbed-node-1 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988064 | orchestrator | testbed-node-2 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988075 | orchestrator | testbed-node-3 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988085 | orchestrator | testbed-node-4 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988096 | orchestrator | testbed-node-5 : ok=9 changed=6 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:36:58.988107 | orchestrator |
2025-06-11 14:36:58.988118 | orchestrator |
2025-06-11 14:36:58.988128 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:36:58.988139 | orchestrator | Wednesday 11 June 2025 14:36:58 +0000 (0:00:02.593) 0:00:20.898 ********
2025-06-11 14:36:58.988159 | orchestrator | ===============================================================================
2025-06-11 14:36:58.988170 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.77s
2025-06-11 14:36:58.988184 | orchestrator | Install python3-docker -------------------------------------------------- 2.59s
2025-06-11 14:36:58.988203 | orchestrator | Apply netplan configuration --------------------------------------------- 2.47s
2025-06-11 14:36:58.988220 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s
2025-06-11 14:36:58.988236 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s
2025-06-11 14:36:58.988252 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s
2025-06-11 14:36:58.988269 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2025-06-11 14:36:58.988289 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.51s
2025-06-11 14:36:58.988307 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s
2025-06-11 14:36:58.988326 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2025-06-11 14:36:58.988337 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s
2025-06-11 14:36:58.988358 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2025-06-11 14:36:59.559599 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-11 14:37:01.215990 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:37:01.216093 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:37:01.216109 | orchestrator | Registering Redlock._release_script
2025-06-11 14:37:01.282675 | orchestrator | 2025-06-11 14:37:01 | INFO | Task 1c0ddbe8-009c-46c0-a9ed-06f685e69b33 (reboot) was prepared for execution.
2025-06-11 14:37:01.282777 | orchestrator | 2025-06-11 14:37:01 | INFO | It takes a moment until task 1c0ddbe8-009c-46c0-a9ed-06f685e69b33 (reboot) has been started and output is visible here.
2025-06-11 14:37:10.756797 | orchestrator |
2025-06-11 14:37:10.756916 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.756933 | orchestrator |
2025-06-11 14:37:10.756946 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.756958 | orchestrator | Wednesday 11 June 2025 14:37:05 +0000 (0:00:00.172) 0:00:00.172 ********
2025-06-11 14:37:10.756970 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:37:10.756982 | orchestrator |
2025-06-11 14:37:10.756993 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757004 | orchestrator | Wednesday 11 June 2025 14:37:05 +0000 (0:00:00.130) 0:00:00.302 ********
2025-06-11 14:37:10.757015 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:37:10.757026 | orchestrator |
2025-06-11 14:37:10.757037 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757048 | orchestrator | Wednesday 11 June 2025 14:37:06 +0000 (0:00:00.906) 0:00:01.208 ********
2025-06-11 14:37:10.757059 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:37:10.757070 | orchestrator |
2025-06-11 14:37:10.757080 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.757091 | orchestrator |
2025-06-11 14:37:10.757102 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.757113 | orchestrator | Wednesday 11 June 2025 14:37:06 +0000 (0:00:00.098) 0:00:01.307 ********
2025-06-11 14:37:10.757124 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:37:10.757135 | orchestrator |
2025-06-11 14:37:10.757146 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757157 | orchestrator | Wednesday 11 June 2025 14:37:06 +0000 (0:00:00.098) 0:00:01.405 ********
2025-06-11 14:37:10.757167 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:37:10.757178 | orchestrator |
2025-06-11 14:37:10.757189 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757200 | orchestrator | Wednesday 11 June 2025 14:37:07 +0000 (0:00:00.620) 0:00:02.025 ********
2025-06-11 14:37:10.757234 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:37:10.757245 | orchestrator |
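Each "Reboot systems" play is a guarded fire-and-forget reboot: the run aborts unless -e ireallymeanit=yes is passed, the reboot is issued without waiting, and reconnection is handled by the separate wait-for-connection play that follows. A minimal sketch of that pattern with the task names from the log; the module choice and guard wording are assumptions about the playbook:

  - name: Exit playbook, if user did not mean to reboot systems
    ansible.builtin.fail:
      msg: "To reboot the systems, pass -e ireallymeanit=yes"
    when: ireallymeanit != 'yes'

  - name: Reboot system - do not wait for the reboot to complete
    ansible.builtin.shell: sleep 2 && shutdown -r now "reboot triggered by Ansible"
    async: 30   # let the task return before the host goes down
    poll: 0
    become: true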
2025-06-11 14:37:10.757256 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.757267 | orchestrator |
2025-06-11 14:37:10.757277 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.757289 | orchestrator | Wednesday 11 June 2025 14:37:07 +0000 (0:00:00.098) 0:00:02.124 ********
2025-06-11 14:37:10.757301 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:37:10.757313 | orchestrator |
2025-06-11 14:37:10.757326 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757338 | orchestrator | Wednesday 11 June 2025 14:37:07 +0000 (0:00:00.143) 0:00:02.267 ********
2025-06-11 14:37:10.757349 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:37:10.757361 | orchestrator |
2025-06-11 14:37:10.757373 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757386 | orchestrator | Wednesday 11 June 2025 14:37:07 +0000 (0:00:00.627) 0:00:02.895 ********
2025-06-11 14:37:10.757398 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:37:10.757410 | orchestrator |
2025-06-11 14:37:10.757439 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.757452 | orchestrator |
2025-06-11 14:37:10.757464 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.757476 | orchestrator | Wednesday 11 June 2025 14:37:07 +0000 (0:00:00.099) 0:00:02.994 ********
2025-06-11 14:37:10.757488 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:37:10.757501 | orchestrator |
2025-06-11 14:37:10.757514 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757525 | orchestrator | Wednesday 11 June 2025 14:37:08 +0000 (0:00:00.088) 0:00:03.083 ********
2025-06-11 14:37:10.757536 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:37:10.757547 | orchestrator |
2025-06-11 14:37:10.757558 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757569 | orchestrator | Wednesday 11 June 2025 14:37:08 +0000 (0:00:00.619) 0:00:03.703 ********
2025-06-11 14:37:10.757579 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:37:10.757590 | orchestrator |
2025-06-11 14:37:10.757601 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.757612 | orchestrator |
2025-06-11 14:37:10.757623 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.757665 | orchestrator | Wednesday 11 June 2025 14:37:08 +0000 (0:00:00.101) 0:00:03.804 ********
2025-06-11 14:37:10.757677 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:37:10.757688 | orchestrator |
2025-06-11 14:37:10.757698 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757709 | orchestrator | Wednesday 11 June 2025 14:37:08 +0000 (0:00:00.091) 0:00:03.896 ********
2025-06-11 14:37:10.757720 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:37:10.757730 | orchestrator |
2025-06-11 14:37:10.757741 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757752 | orchestrator | Wednesday 11 June 2025 14:37:09 +0000 (0:00:00.672) 0:00:04.569 ********
2025-06-11 14:37:10.757763 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:37:10.757773 | orchestrator |
2025-06-11 14:37:10.757784 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-11 14:37:10.757794 | orchestrator |
2025-06-11 14:37:10.757805 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-11 14:37:10.757816 | orchestrator | Wednesday 11 June 2025 14:37:09 +0000 (0:00:00.105) 0:00:04.675 ********
2025-06-11 14:37:10.757826 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:37:10.757837 | orchestrator |
2025-06-11 14:37:10.757848 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-11 14:37:10.757858 | orchestrator | Wednesday 11 June 2025 14:37:09 +0000 (0:00:00.108) 0:00:04.783 ********
2025-06-11 14:37:10.757877 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:37:10.757888 | orchestrator |
2025-06-11 14:37:10.757898 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-11 14:37:10.757909 | orchestrator | Wednesday 11 June 2025 14:37:10 +0000 (0:00:00.664) 0:00:05.448 ********
2025-06-11 14:37:10.757944 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:37:10.757956 | orchestrator |
2025-06-11 14:37:10.757967 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:37:10.757983 | orchestrator | testbed-node-0 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.757995 | orchestrator | testbed-node-1 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.758007 | orchestrator | testbed-node-2 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.758074 | orchestrator | testbed-node-3 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.758086 | orchestrator | testbed-node-4 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.758097 | orchestrator | testbed-node-5 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-11 14:37:10.758108 | orchestrator |
2025-06-11 14:37:10.758119 | orchestrator |
2025-06-11 14:37:10.758130 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:37:10.758140 | orchestrator | Wednesday 11 June 2025 14:37:10 +0000 (0:00:00.034) 0:00:05.483 ********
2025-06-11 14:37:10.758151 | orchestrator | ===============================================================================
2025-06-11 14:37:10.758162 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.11s
2025-06-11 14:37:10.758172 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s
2025-06-11 14:37:10.758216 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s
2025-06-11 14:37:10.994803 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-11 14:37:12.639765 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:37:12.639872 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:37:12.639889 | orchestrator | Registering Redlock._release_script
2025-06-11 14:37:12.701111 | orchestrator | 2025-06-11 14:37:12 | INFO | Task fdd24524-dd4a-46e5-bd29-d64bb79f4d57 (wait-for-connection) was prepared for execution.
2025-06-11 14:37:12.701198 | orchestrator | 2025-06-11 14:37:12 | INFO | It takes a moment until task fdd24524-dd4a-46e5-bd29-d64bb79f4d57 (wait-for-connection) has been started and output is visible here.
2025-06-11 14:37:29.533843 | orchestrator |
2025-06-11 14:37:29.533997 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-11 14:37:29.534095 | orchestrator |
2025-06-11 14:37:29.534118 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-11 14:37:29.534139 | orchestrator | Wednesday 11 June 2025 14:37:16 +0000 (0:00:00.248) 0:00:00.248 ********
2025-06-11 14:37:29.534158 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:37:29.534178 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:37:29.534198 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:37:29.534217 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:37:29.534236 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:37:29.534256 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:37:29.534276 | orchestrator |
2025-06-11 14:37:29.534297 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:37:29.534319 | orchestrator | testbed-node-0 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534375 | orchestrator | testbed-node-1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534398 | orchestrator | testbed-node-2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534417 | orchestrator | testbed-node-3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534436 | orchestrator | testbed-node-4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534455 | orchestrator | testbed-node-5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:37:29.534474 | orchestrator |
2025-06-11 14:37:29.534493 | orchestrator |
2025-06-11 14:37:29.534513 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:37:29.534601 | orchestrator | Wednesday 11 June 2025 14:37:29 +0000 (0:00:12.468) 0:00:12.717 ********
2025-06-11 14:37:29.534624 | orchestrator | ===============================================================================
2025-06-11 14:37:29.534644 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.47s
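The wait-for-connection play pairs with the fire-and-forget reboot above: it blocks until every rebooted node answers again. A minimal sketch of the task; wait_for_connection is a real Ansible builtin, but the specific timeouts here are assumptions:

  - name: Wait until remote system is reachable
    ansible.builtin.wait_for_connection:
      connect_timeout: 10
      timeout: 600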
2025-06-11 14:37:29.763528 | orchestrator | + osism apply hddtemp
2025-06-11 14:37:31.373903 | orchestrator | Registering Redlock._acquired_script
2025-06-11 14:37:31.374091 | orchestrator | Registering Redlock._extend_script
2025-06-11 14:37:31.374110 | orchestrator | Registering Redlock._release_script
2025-06-11 14:37:31.431272 | orchestrator | 2025-06-11 14:37:31 | INFO | Task e3f4f7e9-b5d1-4d9b-99b3-41319d60f4ae (hddtemp) was prepared for execution.
2025-06-11 14:37:31.431379 | orchestrator | 2025-06-11 14:37:31 | INFO | It takes a moment until task e3f4f7e9-b5d1-4d9b-99b3-41319d60f4ae (hddtemp) has been started and output is visible here.
2025-06-11 14:37:57.946176 | orchestrator |
2025-06-11 14:37:57.946322 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-11 14:37:57.946340 | orchestrator |
2025-06-11 14:37:57.946352 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-11 14:37:57.946364 | orchestrator | Wednesday 11 June 2025 14:37:35 +0000 (0:00:00.253) 0:00:00.253 ********
2025-06-11 14:37:57.946375 | orchestrator | ok: [testbed-manager]
2025-06-11 14:37:57.946388 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:37:57.946398 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:37:57.946409 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:37:57.946420 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:37:57.946446 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:37:57.946457 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:37:57.946468 | orchestrator |
2025-06-11 14:37:57.946479 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-11 14:37:57.946490 | orchestrator | Wednesday 11 June 2025 14:37:35 +0000 (0:00:00.655) 0:00:00.908 ********
2025-06-11 14:37:57.946504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:37:57.946518 | orchestrator |
2025-06-11 14:37:57.946530 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-11 14:37:57.946541 | orchestrator | Wednesday 11 June 2025 14:37:37 +0000 (0:00:01.013) 0:00:01.922 ********
2025-06-11 14:37:57.946552 | orchestrator | ok: [testbed-manager]
2025-06-11 14:37:57.946563 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:37:57.946574 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:37:57.946584 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:37:57.946595 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:37:57.946605 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:37:57.946616 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:37:57.946627 | orchestrator |
2025-06-11 14:37:57.946665 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-11 14:37:57.946701 | orchestrator | Wednesday 11 June 2025 14:37:38 +0000 (0:00:01.886) 0:00:03.809 ********
2025-06-11 14:37:57.946713 | orchestrator | changed: [testbed-manager]
2025-06-11 14:37:57.946726 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:37:57.946738 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:37:57.946750 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:37:57.946762 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:37:57.946774 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:37:57.946787 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:37:57.946798 | orchestrator |
2025-06-11 14:37:57.946809 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-11 14:37:57.946819 | orchestrator | Wednesday 11 June 2025 14:37:39 +0000 (0:00:01.046) 0:00:04.855 ********
2025-06-11 14:37:57.946830 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:37:57.946841 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:37:57.946851 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:37:57.946862 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:37:57.946873 | orchestrator | ok: [testbed-manager]
2025-06-11 14:37:57.946883 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:37:57.946894 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:37:57.946905 | orchestrator |
2025-06-11 14:37:57.946915 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-11 14:37:57.946926 | orchestrator | Wednesday 11 June 2025 14:37:41 +0000 (0:00:01.111) 0:00:05.967 ********
2025-06-11 14:37:57.946937 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:37:57.946947 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:37:57.946958 | orchestrator | changed: [testbed-manager]
2025-06-11 14:37:57.946969 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:37:57.946979 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:37:57.946990 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:37:57.947000 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:37:57.947011 | orchestrator |
2025-06-11 14:37:57.947022 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-11 14:37:57.947033 | orchestrator | Wednesday 11 June 2025 14:37:41 +0000 (0:00:00.812) 0:00:06.779 ********
2025-06-11 14:37:57.947043 | orchestrator | changed: [testbed-manager]
2025-06-11 14:37:57.947054 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:37:57.947064 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:37:57.947075 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:37:57.947085 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:37:57.947096 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:37:57.947107 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:37:57.947118 | orchestrator |
2025-06-11 14:37:57.947176 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-11 14:37:57.947187 | orchestrator | Wednesday 11 June 2025 14:37:54 +0000 (0:00:12.451) 0:00:19.231 ********
2025-06-11 14:37:57.947199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:37:57.947210 | orchestrator |
2025-06-11 14:37:57.947221 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-11 14:37:57.947232 | orchestrator | Wednesday 11 June 2025 14:37:55 +0000 (0:00:01.363) 0:00:20.594 ********
2025-06-11 14:37:57.947242 | orchestrator | changed: [testbed-manager]
2025-06-11 14:37:57.947253 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:37:57.947263 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:37:57.947274 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:37:57.947284 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:37:57.947294 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:37:57.947305 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:37:57.947315 | orchestrator |
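"Enable Kernel Module drivetemp" persists the module across reboots, while "Load Kernel Module drivetemp" loads it into the running kernel (only testbed-manager needed the live load here; the freshly rebooted nodes picked it up at boot and were skipped). A minimal sketch of that pair, assuming the common modules-load.d approach; the role's actual tasks may differ:

  - name: Enable Kernel Module drivetemp
    ansible.builtin.copy:
      content: "drivetemp\n"
      dest: /etc/modules-load.d/drivetemp.conf  # assumed location
      mode: "0644"

  - name: Load Kernel Module drivetemp
    community.general.modprobe:
      name: drivetemp
      state: present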
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:37:57.947415 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:37:57.947427 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:37:57.947438 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:37:57.947449 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:37:57.947460 | orchestrator | 2025-06-11 14:37:57.947471 | orchestrator | 2025-06-11 14:37:57.947482 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:37:57.947493 | orchestrator | Wednesday 11 June 2025 14:37:57 +0000 (0:00:01.896) 0:00:22.491 ******** 2025-06-11 14:37:57.947505 | orchestrator | =============================================================================== 2025-06-11 14:37:57.947516 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.45s 2025-06-11 14:37:57.947527 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2025-06-11 14:37:57.947538 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.89s 2025-06-11 14:37:57.947549 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.36s 2025-06-11 14:37:57.947560 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.11s 2025-06-11 14:37:57.947571 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.05s 2025-06-11 14:37:57.947582 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s 2025-06-11 14:37:57.947593 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.81s 2025-06-11 14:37:57.947604 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.66s 2025-06-11 14:37:58.178247 | orchestrator | ++ semver latest 7.1.1 2025-06-11 14:37:58.222526 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-11 14:37:58.222605 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 14:37:58.222616 | orchestrator | + sudo systemctl restart manager.service 2025-06-11 14:38:37.287882 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-11 14:38:37.287964 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-11 14:38:37.287971 | orchestrator | + local max_attempts=60 2025-06-11 14:38:37.287976 | orchestrator | + local name=ceph-ansible 2025-06-11 14:38:37.287980 | orchestrator | + local attempt_num=1 2025-06-11 14:38:37.287985 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:38:37.321466 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:38:37.321563 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:38:37.321569 | orchestrator | + sleep 5 2025-06-11 14:38:42.331906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:38:42.367158 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:38:42.367254 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:38:42.367269 | orchestrator | + sleep 5 2025-06-11 
14:38:47.369670 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:38:47.396126 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:38:47.396193 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:38:47.396234 | orchestrator | + sleep 5 2025-06-11 14:38:52.399203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:38:52.432355 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:38:52.432449 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:38:52.432473 | orchestrator | + sleep 5 2025-06-11 14:38:57.437292 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:38:57.480547 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:38:57.480640 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:38:57.480654 | orchestrator | + sleep 5 2025-06-11 14:39:02.484935 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:02.522251 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:02.522356 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:02.522371 | orchestrator | + sleep 5 2025-06-11 14:39:07.526295 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:07.565019 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:07.565127 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:07.565143 | orchestrator | + sleep 5 2025-06-11 14:39:12.572068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:12.610455 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:12.610527 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:12.610541 | orchestrator | + sleep 5 2025-06-11 14:39:17.610951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:17.632580 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:17.632659 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:17.632674 | orchestrator | + sleep 5 2025-06-11 14:39:22.635261 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:22.669341 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:22.669426 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:22.669440 | orchestrator | + sleep 5 2025-06-11 14:39:27.673581 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:27.716403 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:27.716479 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:27.716493 | orchestrator | + sleep 5 2025-06-11 14:39:32.721690 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:32.756454 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:32.756562 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-11 14:39:32.756597 | orchestrator | + sleep 5 2025-06-11 14:39:37.760690 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:37.802874 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:37.802962 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-06-11 14:39:37.802977 | orchestrator | + sleep 5 2025-06-11 14:39:42.805519 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-11 14:39:42.835558 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:42.835638 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-11 14:39:42.835662 | orchestrator | + local max_attempts=60 2025-06-11 14:39:42.835683 | orchestrator | + local name=kolla-ansible 2025-06-11 14:39:42.835703 | orchestrator | + local attempt_num=1 2025-06-11 14:39:42.837310 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-11 14:39:42.859699 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:42.859781 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-11 14:39:42.859795 | orchestrator | + local max_attempts=60 2025-06-11 14:39:42.859806 | orchestrator | + local name=osism-ansible 2025-06-11 14:39:42.859817 | orchestrator | + local attempt_num=1 2025-06-11 14:39:42.859829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-11 14:39:42.893624 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-11 14:39:42.893695 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-11 14:39:42.893707 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-11 14:39:43.034148 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-11 14:39:43.170871 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-11 14:39:43.452160 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-11 14:39:43.452458 | orchestrator | + osism apply gather-facts 2025-06-11 14:39:44.999844 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:39:44.999970 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:39:45.000801 | orchestrator | Registering Redlock._release_script 2025-06-11 14:39:45.065216 | orchestrator | 2025-06-11 14:39:45 | INFO  | Task 6b4bc6a3-0966-421b-8715-fd1fca3a1154 (gather-facts) was prepared for execution. 2025-06-11 14:39:45.065307 | orchestrator | 2025-06-11 14:39:45 | INFO  | It takes a moment until task 6b4bc6a3-0966-421b-8715-fd1fca3a1154 (gather-facts) has been started and output is visible here. 
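The trace above shows the deployment gating on container health after the manager restart: wait_for_container_healthy polls docker inspect every five seconds until the named container reports healthy, for at most max_attempts rounds. A minimal bash sketch of such a poll loop, reconstructed from the trace (the timeout message and the non-zero return on exhaustion are assumptions, since the log never reaches that branch):

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        # Poll the health status reported by the container's HEALTHCHECK.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            # Assumed failure handling; not exercised in the log above.
            if (( attempt_num++ == max_attempts )); then
                echo "$name not healthy after $max_attempts attempts" >&2
                return 1
            fi
            sleep 5
        done
    }

    # As in the trace: up to 60 attempts (roughly five minutes) per container.
    wait_for_container_healthy 60 ceph-ansible

The ceph-ansible container passes through unhealthy and starting before reaching healthy, which is why its loop runs for about a minute; kolla-ansible and osism-ansible are already healthy on the first probe.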
2025-06-11 14:39:54.918994 | orchestrator | 2025-06-11 14:39:54.919088 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 14:39:54.919103 | orchestrator | 2025-06-11 14:39:54.919115 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 14:39:54.919127 | orchestrator | Wednesday 11 June 2025 14:39:48 +0000 (0:00:00.172) 0:00:00.173 ******** 2025-06-11 14:39:54.919138 | orchestrator | ok: [testbed-manager] 2025-06-11 14:39:54.919150 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:39:54.919161 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:39:54.919172 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:39:54.919182 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:39:54.919193 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:39:54.919204 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:39:54.919215 | orchestrator | 2025-06-11 14:39:54.919226 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-11 14:39:54.919238 | orchestrator | 2025-06-11 14:39:54.919249 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-11 14:39:54.919260 | orchestrator | Wednesday 11 June 2025 14:39:54 +0000 (0:00:05.565) 0:00:05.738 ******** 2025-06-11 14:39:54.919271 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:39:54.919283 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:39:54.919294 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:39:54.919305 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:39:54.919315 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:39:54.919330 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:39:54.919348 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:39:54.919359 | orchestrator | 2025-06-11 14:39:54.919370 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:39:54.919381 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919394 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919405 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919415 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919426 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919437 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919448 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 14:39:54.919459 | orchestrator | 2025-06-11 14:39:54.919470 | orchestrator | 2025-06-11 14:39:54.919480 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:39:54.919493 | orchestrator | Wednesday 11 June 2025 14:39:54 +0000 (0:00:00.479) 0:00:06.218 ******** 2025-06-11 14:39:54.919511 | orchestrator | =============================================================================== 2025-06-11 14:39:54.919522 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.57s 2025-06-11 
14:39:54.919560 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-06-11 14:39:55.161275 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-11 14:39:55.174699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-11 14:39:55.186812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-11 14:39:55.204253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-11 14:39:55.216666 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-11 14:39:55.229644 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-11 14:39:55.247965 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-11 14:39:55.259247 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-11 14:39:55.275100 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-11 14:39:55.285928 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-11 14:39:55.301818 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-11 14:39:55.319584 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-11 14:39:55.339702 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-11 14:39:55.358990 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-11 14:39:55.373554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-11 14:39:55.385979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-11 14:39:55.403467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-11 14:39:55.418201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-11 14:39:55.435422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-11 14:39:55.453073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-11 14:39:55.472302 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-11 14:39:55.798051 | orchestrator | ok: Runtime: 0:20:13.341880 2025-06-11 14:39:55.894409 | 2025-06-11 14:39:55.894542 | TASK [Deploy services] 2025-06-11 14:39:56.427207 | orchestrator | skipping: Conditional result was False 2025-06-11 14:39:56.443213 | 2025-06-11 14:39:56.443383 | TASK [Deploy in a nutshell] 2025-06-11 14:39:57.154006 | orchestrator | + set -e 
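The symlink block above publishes each numbered stage script under a short command name; the numeric prefixes suggest the intended order (100 Ceph, 200 infrastructure, 300 OpenStack, 400 monitoring, 500 Kubernetes). A compact illustration of chaining a full run by hand, using only command names created above (a hypothetical sequence; the nutshell task whose trace starts here drives these stages itself):

    # Hypothetical manual run of the deploy stages, in prefix order.
    deploy-ceph-with-ansible && deploy-infrastructure && deploy-openstack && deploy-monitoring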
2025-06-11 14:39:57.154144 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 14:39:57.154155 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 14:39:57.154164 | orchestrator | ++ INTERACTIVE=false 2025-06-11 14:39:57.154170 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 14:39:57.154175 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 14:39:57.154181 | orchestrator | + source /opt/manager-vars.sh 2025-06-11 14:39:57.154275 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-11 14:39:57.154290 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-11 14:39:57.154296 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-11 14:39:57.154302 | orchestrator | ++ CEPH_VERSION=reef 2025-06-11 14:39:57.154307 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-11 14:39:57.154315 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-11 14:39:57.154319 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 14:39:57.154329 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 14:39:57.154333 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-11 14:39:57.154346 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-11 14:39:57.154351 | orchestrator | ++ export ARA=false 2025-06-11 14:39:57.154355 | orchestrator | ++ ARA=false 2025-06-11 14:39:57.154359 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-11 14:39:57.154364 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-11 14:39:57.154368 | orchestrator | ++ export TEMPEST=false 2025-06-11 14:39:57.154372 | orchestrator | ++ TEMPEST=false 2025-06-11 14:39:57.154376 | orchestrator | ++ export IS_ZUUL=true 2025-06-11 14:39:57.154380 | orchestrator | ++ IS_ZUUL=true 2025-06-11 14:39:57.154384 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:39:57.154389 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 14:39:57.154429 | orchestrator | 2025-06-11 14:39:57.154435 | orchestrator | # PULL IMAGES 2025-06-11 14:39:57.154439 | orchestrator | 2025-06-11 14:39:57.154450 | orchestrator | ++ export EXTERNAL_API=false 2025-06-11 14:39:57.154455 | orchestrator | ++ EXTERNAL_API=false 2025-06-11 14:39:57.154459 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-11 14:39:57.154463 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-11 14:39:57.154467 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-11 14:39:57.154471 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-11 14:39:57.154475 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-11 14:39:57.154483 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-11 14:39:57.154487 | orchestrator | + echo 2025-06-11 14:39:57.154492 | orchestrator | + echo '# PULL IMAGES' 2025-06-11 14:39:57.154496 | orchestrator | + echo 2025-06-11 14:39:57.155902 | orchestrator | ++ semver latest 7.0.0 2025-06-11 14:39:57.215663 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-11 14:39:57.215828 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 14:39:57.215847 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-11 14:39:58.858499 | orchestrator | 2025-06-11 14:39:58 | INFO  | Trying to run play pull-images in environment custom 2025-06-11 14:39:58.863448 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:39:58.863516 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:39:58.863529 | orchestrator | Registering Redlock._release_script 2025-06-11 14:39:58.925797 | orchestrator | 2025-06-11 14:39:58 | INFO  | Task 5ffe8f65-1e76-465a-8fb0-7c42450d14d7 (pull-images) was 
prepared for execution. 2025-06-11 14:39:58.925880 | orchestrator | 2025-06-11 14:39:58 | INFO  | It takes a moment until task 5ffe8f65-1e76-465a-8fb0-7c42450d14d7 (pull-images) has been started and output is visible here. 2025-06-11 14:41:59.176692 | orchestrator | 2025-06-11 14:41:59.176891 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-11 14:41:59.177470 | orchestrator | 2025-06-11 14:41:59.177494 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-11 14:41:59.177517 | orchestrator | Wednesday 11 June 2025 14:40:02 +0000 (0:00:00.160) 0:00:00.160 ******** 2025-06-11 14:41:59.177530 | orchestrator | changed: [testbed-manager] 2025-06-11 14:41:59.177544 | orchestrator | 2025-06-11 14:41:59.177557 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-11 14:41:59.177570 | orchestrator | Wednesday 11 June 2025 14:41:09 +0000 (0:01:06.345) 0:01:06.506 ******** 2025-06-11 14:41:59.177582 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-11 14:41:59.177597 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-11 14:41:59.177608 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-11 14:41:59.177653 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-11 14:41:59.177679 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-11 14:41:59.177699 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-11 14:41:59.177718 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-11 14:41:59.177737 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-11 14:41:59.177756 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-11 14:41:59.177776 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-11 14:41:59.177827 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-11 14:41:59.177847 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-11 14:41:59.177868 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-11 14:41:59.177887 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-11 14:41:59.177906 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-11 14:41:59.177926 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-11 14:41:59.177945 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-11 14:41:59.177964 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-11 14:41:59.177993 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-11 14:41:59.178076 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-11 14:41:59.178104 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-11 14:41:59.178123 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-11 14:41:59.178142 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-11 14:41:59.178162 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-11 14:41:59.178180 | orchestrator | 2025-06-11 14:41:59.178197 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:41:59.178209 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:41:59.178229 | orchestrator | 2025-06-11 14:41:59.178247 | 
orchestrator | 2025-06-11 14:41:59.178265 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:41:59.178285 | orchestrator | Wednesday 11 June 2025 14:41:58 +0000 (0:00:49.827) 0:01:56.334 ******** 2025-06-11 14:41:59.178306 | orchestrator | =============================================================================== 2025-06-11 14:41:59.178324 | orchestrator | Pull keystone image ---------------------------------------------------- 66.35s 2025-06-11 14:41:59.178344 | orchestrator | Pull other images ------------------------------------------------------ 49.83s 2025-06-11 14:42:01.230303 | orchestrator | 2025-06-11 14:42:01 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-11 14:42:01.236415 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:42:01.236476 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:42:01.236491 | orchestrator | Registering Redlock._release_script 2025-06-11 14:42:01.330519 | orchestrator | 2025-06-11 14:42:01 | INFO  | Task e074080c-706f-45cb-be15-a77a5c4ef77f (wipe-partitions) was prepared for execution. 2025-06-11 14:42:01.330617 | orchestrator | 2025-06-11 14:42:01 | INFO  | It takes a moment until task e074080c-706f-45cb-be15-a77a5c4ef77f (wipe-partitions) has been started and output is visible here. 2025-06-11 14:42:13.465286 | orchestrator | 2025-06-11 14:42:13.465382 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-11 14:42:13.465402 | orchestrator | 2025-06-11 14:42:13.465416 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-11 14:42:13.465430 | orchestrator | Wednesday 11 June 2025 14:42:05 +0000 (0:00:00.133) 0:00:00.133 ******** 2025-06-11 14:42:13.465442 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:42:13.465457 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:42:13.465471 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:42:13.465485 | orchestrator | 2025-06-11 14:42:13.465508 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-11 14:42:13.465541 | orchestrator | Wednesday 11 June 2025 14:42:05 +0000 (0:00:00.546) 0:00:00.680 ******** 2025-06-11 14:42:13.465555 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:13.465568 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:13.465582 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:42:13.465595 | orchestrator | 2025-06-11 14:42:13.465608 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-11 14:42:13.465622 | orchestrator | Wednesday 11 June 2025 14:42:05 +0000 (0:00:00.281) 0:00:00.962 ******** 2025-06-11 14:42:13.465636 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:42:13.465650 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:42:13.465663 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:42:13.465675 | orchestrator | 2025-06-11 14:42:13.465688 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-11 14:42:13.465701 | orchestrator | Wednesday 11 June 2025 14:42:06 +0000 (0:00:00.696) 0:00:01.658 ******** 2025-06-11 14:42:13.465714 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:13.465726 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:13.465738 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:42:13.465750 | orchestrator | 
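UID 167 in the first wipe-partitions task above is the fixed uid of the ceph user inside Ceph containers, so a device-mapper node owned by it is a leftover OSD volume from an earlier deployment. A manual equivalent of that discovery step (illustrative only; the playbook's actual command is not shown in the log):

    # List /dev/dm-* nodes owned by uid 167 (the containerized ceph user).
    find /dev -name 'dm-*' -uid 167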
2025-06-11 14:42:13.465767 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-11 14:42:13.465780 | orchestrator | Wednesday 11 June 2025 14:42:06 +0000 (0:00:00.271) 0:00:01.929 ******** 2025-06-11 14:42:13.465836 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-11 14:42:13.465851 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-11 14:42:13.465864 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-11 14:42:13.465877 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-11 14:42:13.465890 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-11 14:42:13.465902 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-11 14:42:13.465915 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-11 14:42:13.465928 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-11 14:42:13.465942 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-11 14:42:13.465955 | orchestrator | 2025-06-11 14:42:13.465967 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-11 14:42:13.465978 | orchestrator | Wednesday 11 June 2025 14:42:08 +0000 (0:00:01.246) 0:00:03.176 ******** 2025-06-11 14:42:13.465989 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-11 14:42:13.466000 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-11 14:42:13.466010 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-11 14:42:13.466071 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-11 14:42:13.466085 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-11 14:42:13.466097 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-11 14:42:13.466109 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-11 14:42:13.466121 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-11 14:42:13.466133 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-11 14:42:13.466144 | orchestrator | 2025-06-11 14:42:13.466156 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-11 14:42:13.466170 | orchestrator | Wednesday 11 June 2025 14:42:09 +0000 (0:00:01.445) 0:00:04.621 ******** 2025-06-11 14:42:13.466182 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-11 14:42:13.466195 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-11 14:42:13.466207 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-11 14:42:13.466218 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-11 14:42:13.466228 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-11 14:42:13.466240 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-11 14:42:13.466251 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-11 14:42:13.466272 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-11 14:42:13.466283 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-11 14:42:13.466294 | orchestrator | 2025-06-11 14:42:13.466305 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-11 14:42:13.466316 | orchestrator | Wednesday 11 June 2025 14:42:11 +0000 (0:00:02.310) 0:00:06.931 ******** 2025-06-11 14:42:13.466328 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:42:13.466339 
| orchestrator | changed: [testbed-node-4] 2025-06-11 14:42:13.466351 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:42:13.466362 | orchestrator | 2025-06-11 14:42:13.466373 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-11 14:42:13.466384 | orchestrator | Wednesday 11 June 2025 14:42:12 +0000 (0:00:00.610) 0:00:07.542 ******** 2025-06-11 14:42:13.466396 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:42:13.466413 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:42:13.466424 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:42:13.466436 | orchestrator | 2025-06-11 14:42:13.466447 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:42:13.466460 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:42:13.466472 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:42:13.466502 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:42:13.466516 | orchestrator | 2025-06-11 14:42:13.466528 | orchestrator | 2025-06-11 14:42:13.466539 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:42:13.466551 | orchestrator | Wednesday 11 June 2025 14:42:13 +0000 (0:00:00.648) 0:00:08.191 ******** 2025-06-11 14:42:13.466563 | orchestrator | =============================================================================== 2025-06-11 14:42:13.466575 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.31s 2025-06-11 14:42:13.466587 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.45s 2025-06-11 14:42:13.466599 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2025-06-11 14:42:13.466611 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s 2025-06-11 14:42:13.466623 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2025-06-11 14:42:13.466634 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-06-11 14:42:13.466646 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-06-11 14:42:13.466657 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2025-06-11 14:42:13.466669 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-06-11 14:42:15.142338 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:42:15.142423 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:42:15.142436 | orchestrator | Registering Redlock._release_script 2025-06-11 14:42:15.196128 | orchestrator | 2025-06-11 14:42:15 | INFO  | Task ded8e06c-3f18-4266-af0b-7518b740fae3 (facts) was prepared for execution. 2025-06-11 14:42:15.196222 | orchestrator | 2025-06-11 14:42:15 | INFO  | It takes a moment until task ded8e06c-3f18-4266-af0b-7518b740fae3 (facts) has been started and output is visible here. 
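The wipe-partitions play above maps onto a short sequence of standard tools. A rough shell equivalent of the per-device steps, with the device list taken from the log (the exact flags are assumptions; the play's actual modules are not visible here):

    # "Wipe partitions with wipefs" and "Overwrite first 32M with zeros"
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                        # drop filesystem/RAID/partition signatures
        dd if=/dev/zero of="$dev" bs=1M count=32   # zero the first 32 MiB of stale metadata
    done
    udevadm control --reload-rules                 # "Reload udev rules"
    udevadm trigger                                # "Request device events from the kernel"

Only testbed-node-3/4/5 are wiped, which suggests those are the storage nodes carrying the extra sdb/sdc/sdd disks in this topology.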
2025-06-11 14:42:20.867864 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-06-11 14:42:20.867963 | orchestrator | -vvvv to see details 2025-06-11 14:42:20.867980 | orchestrator | 2025-06-11 14:42:20.867992 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-11 14:42:20.868004 | orchestrator | 2025-06-11 14:42:20.868018 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 14:42:20.868052 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868076 | orchestrator | ...ignoring 2025-06-11 14:42:20.868088 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868099 | orchestrator | ...ignoring 2025-06-11 14:42:20.868110 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868121 | orchestrator | ...ignoring 2025-06-11 14:42:20.868139 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868151 | orchestrator | ...ignoring 2025-06-11 14:42:20.868162 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868173 | orchestrator | ...ignoring 2025-06-11 14:42:20.868184 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868195 | orchestrator | ...ignoring 2025-06-11 14:42:20.868206 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868217 | orchestrator | ...ignoring 2025-06-11 14:42:20.868228 | orchestrator | 2025-06-11 14:42:20.868238 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-11 14:42:20.868250 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868261 | orchestrator | ...ignoring 2025-06-11 14:42:20.868272 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868283 | orchestrator | ...ignoring 2025-06-11 14:42:20.868293 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868314 | orchestrator | ...ignoring 2025-06-11 14:42:20.868343 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868357 | orchestrator | ...ignoring 2025-06-11 14:42:20.868370 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868382 | orchestrator | ...ignoring 2025-06-11 14:42:20.868395 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868407 | orchestrator | ...ignoring 2025-06-11 14:42:20.868419 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868431 | orchestrator | ...ignoring 2025-06-11 14:42:20.868443 | orchestrator | 2025-06-11 14:42:20.868455 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-11 14:42:20.868468 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:42:20.868480 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:42:20.868492 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:42:20.868505 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:42:20.868517 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:20.868528 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:20.868538 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:42:20.868549 | orchestrator | 2025-06-11 14:42:20.868560 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 14:42:20.868570 | orchestrator | 2025-06-11 14:42:20.868581 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 14:42:20.868592 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868603 | orchestrator | ...ignoring 2025-06-11 14:42:20.868613 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868624 | orchestrator | ...ignoring 2025-06-11 14:42:20.868640 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868651 | orchestrator | ...ignoring 2025-06-11 14:42:20.868662 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868682 | orchestrator | ...ignoring 2025-06-11 14:42:20.868694 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868705 | orchestrator | ...ignoring 2025-06-11 14:42:20.868716 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:20.868727 | orchestrator | ...ignoring 2025-06-11 14:42:20.868745 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2025-06-11 14:42:21.141977 | orchestrator | ...ignoring 2025-06-11 14:42:21.142083 | orchestrator | 2025-06-11 14:42:21.142096 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-11 14:42:21.142105 | orchestrator | 2025-06-11 14:42:21.142113 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-11 14:42:21.142121 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:42:21.142129 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:42:21.142137 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:42:21.142144 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:42:21.142152 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:21.142160 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:21.142167 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:42:21.142175 | orchestrator | 2025-06-11 14:42:21.142182 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:42:21.142191 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142213 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142221 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142229 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142237 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142245 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142253 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=3  2025-06-11 14:42:21.142260 | orchestrator | 2025-06-11 14:42:22.599947 | orchestrator | 2025-06-11 14:42:22 | INFO  | Task 8d73d900-a70c-4758-9f45-ec72e735e6c8 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-11 14:42:22.600029 | orchestrator | 2025-06-11 14:42:22 | INFO  | It takes a moment until task 8d73d900-a70c-4758-9f45-ec72e735e6c8 (ceph-configure-lvm-volumes) has been started and output is visible here. 
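Every host in the facts play above fails with the same root cause: the identity file /ansible/secrets/id_rsa does not exist inside the Ansible runner, so publickey authentication for the dragon user is refused. The failures are ignored (ignored=3 per host in the recap), so the run continues to the next task. A hypothetical spot-check from inside the runner, using the path, user, and an address taken from the errors:

    # Does the identity file the runner expects actually exist?
    test -f /ansible/secrets/id_rsa && echo "key present" || echo "key missing"

    # Attempt a non-interactive login the way Ansible would; BatchMode fails
    # fast instead of prompting when publickey auth is broken.
    ssh -i /ansible/secrets/id_rsa -o BatchMode=yes dragon@192.168.16.10 true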
2025-06-11 14:42:33.669114 | orchestrator | 2025-06-11 14:42:33.669230 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-11 14:42:33.669247 | orchestrator | 2025-06-11 14:42:33.669260 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-11 14:42:33.669272 | orchestrator | Wednesday 11 June 2025 14:42:26 +0000 (0:00:00.276) 0:00:00.276 ******** 2025-06-11 14:42:33.669283 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 14:42:33.669294 | orchestrator | 2025-06-11 14:42:33.669306 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-11 14:42:33.669317 | orchestrator | Wednesday 11 June 2025 14:42:26 +0000 (0:00:00.234) 0:00:00.510 ******** 2025-06-11 14:42:33.669327 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:42:33.669340 | orchestrator | 2025-06-11 14:42:33.669350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669361 | orchestrator | Wednesday 11 June 2025 14:42:26 +0000 (0:00:00.192) 0:00:00.703 ******** 2025-06-11 14:42:33.669372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-11 14:42:33.669383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-11 14:42:33.669394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-11 14:42:33.669405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-11 14:42:33.669416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-11 14:42:33.669426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-11 14:42:33.669438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-11 14:42:33.669449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-11 14:42:33.669460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-11 14:42:33.669471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-11 14:42:33.669482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-11 14:42:33.669492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-11 14:42:33.669503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-11 14:42:33.669513 | orchestrator | 2025-06-11 14:42:33.669525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669536 | orchestrator | Wednesday 11 June 2025 14:42:26 +0000 (0:00:00.326) 0:00:01.029 ******** 2025-06-11 14:42:33.669547 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669557 | orchestrator | 2025-06-11 14:42:33.669568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669579 | orchestrator | Wednesday 11 June 2025 14:42:27 +0000 (0:00:00.354) 0:00:01.384 ******** 2025-06-11 14:42:33.669589 | orchestrator | skipping: [testbed-node-3] 2025-06-11 
14:42:33.669600 | orchestrator | 2025-06-11 14:42:33.669611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669621 | orchestrator | Wednesday 11 June 2025 14:42:27 +0000 (0:00:00.165) 0:00:01.549 ******** 2025-06-11 14:42:33.669632 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669643 | orchestrator | 2025-06-11 14:42:33.669656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669668 | orchestrator | Wednesday 11 June 2025 14:42:27 +0000 (0:00:00.172) 0:00:01.722 ******** 2025-06-11 14:42:33.669680 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669692 | orchestrator | 2025-06-11 14:42:33.669726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669738 | orchestrator | Wednesday 11 June 2025 14:42:27 +0000 (0:00:00.208) 0:00:01.930 ******** 2025-06-11 14:42:33.669750 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669762 | orchestrator | 2025-06-11 14:42:33.669774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669787 | orchestrator | Wednesday 11 June 2025 14:42:28 +0000 (0:00:00.187) 0:00:02.118 ******** 2025-06-11 14:42:33.669851 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669863 | orchestrator | 2025-06-11 14:42:33.669875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669887 | orchestrator | Wednesday 11 June 2025 14:42:28 +0000 (0:00:00.203) 0:00:02.321 ******** 2025-06-11 14:42:33.669899 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669911 | orchestrator | 2025-06-11 14:42:33.669923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669935 | orchestrator | Wednesday 11 June 2025 14:42:28 +0000 (0:00:00.180) 0:00:02.502 ******** 2025-06-11 14:42:33.669947 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.669958 | orchestrator | 2025-06-11 14:42:33.669969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.669980 | orchestrator | Wednesday 11 June 2025 14:42:28 +0000 (0:00:00.170) 0:00:02.673 ******** 2025-06-11 14:42:33.669991 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb) 2025-06-11 14:42:33.670003 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb) 2025-06-11 14:42:33.670108 | orchestrator | 2025-06-11 14:42:33.670124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.670135 | orchestrator | Wednesday 11 June 2025 14:42:29 +0000 (0:00:00.385) 0:00:03.058 ******** 2025-06-11 14:42:33.670181 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299) 2025-06-11 14:42:33.670193 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299) 2025-06-11 14:42:33.670204 | orchestrator | 2025-06-11 14:42:33.670214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.670225 | orchestrator | Wednesday 11 June 2025 14:42:29 +0000 (0:00:00.501) 0:00:03.560 ******** 2025-06-11 
14:42:33.670235 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3) 2025-06-11 14:42:33.670246 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3) 2025-06-11 14:42:33.670256 | orchestrator | 2025-06-11 14:42:33.670267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.670277 | orchestrator | Wednesday 11 June 2025 14:42:30 +0000 (0:00:00.617) 0:00:04.177 ******** 2025-06-11 14:42:33.670288 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc) 2025-06-11 14:42:33.670298 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc) 2025-06-11 14:42:33.670309 | orchestrator | 2025-06-11 14:42:33.670319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:33.670330 | orchestrator | Wednesday 11 June 2025 14:42:30 +0000 (0:00:00.680) 0:00:04.857 ******** 2025-06-11 14:42:33.670341 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-11 14:42:33.670351 | orchestrator | 2025-06-11 14:42:33.670361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670372 | orchestrator | Wednesday 11 June 2025 14:42:31 +0000 (0:00:00.845) 0:00:05.703 ******** 2025-06-11 14:42:33.670382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-11 14:42:33.670393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-11 14:42:33.670414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-11 14:42:33.670424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-11 14:42:33.670435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-11 14:42:33.670445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-11 14:42:33.670456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-11 14:42:33.670466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-11 14:42:33.670477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-11 14:42:33.670487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-11 14:42:33.670497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-11 14:42:33.670508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-11 14:42:33.670518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-11 14:42:33.670528 | orchestrator | 2025-06-11 14:42:33.670539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670549 | orchestrator | Wednesday 11 June 2025 14:42:32 +0000 (0:00:00.413) 0:00:06.116 ******** 2025-06-11 14:42:33.670560 | orchestrator | skipping: [testbed-node-3] 
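The repeated "Add known links" tasks above resolve each kernel device to its persistent aliases; the paired scsi-0QEMU.../scsi-SQEMU... items per disk are /dev/disk/by-id names, which stay stable across reboots while sdb/sdc/sdd may not. The same mapping can be inspected by hand (illustrative commands, not part of the playbook):

    # All persistent aliases and the kernel devices they point to.
    ls -l /dev/disk/by-id/

    # The symlinks udev maintains for one specific device.
    udevadm info -q symlink --name /dev/sdb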
2025-06-11 14:42:33.670570 | orchestrator | 2025-06-11 14:42:33.670580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670591 | orchestrator | Wednesday 11 June 2025 14:42:32 +0000 (0:00:00.211) 0:00:06.328 ******** 2025-06-11 14:42:33.670601 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670612 | orchestrator | 2025-06-11 14:42:33.670622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670633 | orchestrator | Wednesday 11 June 2025 14:42:32 +0000 (0:00:00.184) 0:00:06.512 ******** 2025-06-11 14:42:33.670649 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670660 | orchestrator | 2025-06-11 14:42:33.670670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670681 | orchestrator | Wednesday 11 June 2025 14:42:32 +0000 (0:00:00.193) 0:00:06.706 ******** 2025-06-11 14:42:33.670691 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670702 | orchestrator | 2025-06-11 14:42:33.670713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670723 | orchestrator | Wednesday 11 June 2025 14:42:32 +0000 (0:00:00.190) 0:00:06.896 ******** 2025-06-11 14:42:33.670734 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670744 | orchestrator | 2025-06-11 14:42:33.670755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670765 | orchestrator | Wednesday 11 June 2025 14:42:33 +0000 (0:00:00.214) 0:00:07.111 ******** 2025-06-11 14:42:33.670776 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670786 | orchestrator | 2025-06-11 14:42:33.670835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670847 | orchestrator | Wednesday 11 June 2025 14:42:33 +0000 (0:00:00.208) 0:00:07.319 ******** 2025-06-11 14:42:33.670858 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:33.670868 | orchestrator | 2025-06-11 14:42:33.670879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:33.670889 | orchestrator | Wednesday 11 June 2025 14:42:33 +0000 (0:00:00.180) 0:00:07.500 ******** 2025-06-11 14:42:33.670908 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:40.203168 | orchestrator | 2025-06-11 14:42:40.203245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:40.203274 | orchestrator | Wednesday 11 June 2025 14:42:33 +0000 (0:00:00.205) 0:00:07.705 ******** 2025-06-11 14:42:40.203283 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-11 14:42:40.203293 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-11 14:42:40.203301 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-11 14:42:40.203309 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-11 14:42:40.203317 | orchestrator | 2025-06-11 14:42:40.203325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:40.203332 | orchestrator | Wednesday 11 June 2025 14:42:34 +0000 (0:00:00.923) 0:00:08.628 ******** 2025-06-11 14:42:40.203340 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:42:40.203348 | orchestrator | 2025-06-11 14:42:40.203356 | orchestrator | 
TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:42:40.203364 | orchestrator | Wednesday 11 June 2025 14:42:34 +0000 (0:00:00.180) 0:00:08.808 ********
2025-06-11 14:42:40.203372 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203379 | orchestrator |
2025-06-11 14:42:40.203387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:42:40.203395 | orchestrator | Wednesday 11 June 2025 14:42:34 +0000 (0:00:00.180) 0:00:08.989 ********
2025-06-11 14:42:40.203403 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203410 | orchestrator |
2025-06-11 14:42:40.203418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:42:40.203426 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.185) 0:00:09.175 ********
2025-06-11 14:42:40.203433 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203441 | orchestrator |
2025-06-11 14:42:40.203449 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-11 14:42:40.203456 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.164) 0:00:09.340 ********
2025-06-11 14:42:40.203464 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-11 14:42:40.203472 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-11 14:42:40.203479 | orchestrator |
2025-06-11 14:42:40.203487 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-11 14:42:40.203495 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.157) 0:00:09.497 ********
2025-06-11 14:42:40.203503 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203510 | orchestrator |
2025-06-11 14:42:40.203518 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-11 14:42:40.203526 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.105) 0:00:09.603 ********
2025-06-11 14:42:40.203533 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203541 | orchestrator |
2025-06-11 14:42:40.203548 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-11 14:42:40.203556 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.116) 0:00:09.719 ********
2025-06-11 14:42:40.203564 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203571 | orchestrator |
2025-06-11 14:42:40.203579 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-11 14:42:40.203587 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.123) 0:00:09.842 ********
2025-06-11 14:42:40.203595 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:42:40.203602 | orchestrator |
2025-06-11 14:42:40.203610 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-11 14:42:40.203618 | orchestrator | Wednesday 11 June 2025 14:42:35 +0000 (0:00:00.123) 0:00:09.966 ********
2025-06-11 14:42:40.203640 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '28682609-b410-5575-84cb-1d408b8d4d4a'}})
2025-06-11 14:42:40.203648 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6a3d2e7-9824-554b-8cae-981831ed9e32'}})
2025-06-11 14:42:40.203656 | orchestrator |
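All osd_lvm_uuid values that appear from here on are version 5 (name-based) UUIDs, so they are deterministic: re-running the play on the same host and device yields the same value, and with it stable VG/LV names. Ansible's to_uuid filter produces exactly this kind of UUID. A sketch of the two steps just logged, with the seed string and list variable names as assumptions (the actual inputs are not visible in this log):

    - name: Set UUIDs for OSD VGs/LVs
      ansible.builtin.set_fact:
        ceph_osd_devices: "{{ ceph_osd_devices | combine({item.key: {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item.key) | to_uuid}}) }}"
      loop: "{{ ceph_osd_devices | dict2items }}"
      when: item.value is none             # matches the (item={'key': 'sdb', 'value': None}) lines above

    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        lvm_volumes_block: "{{ lvm_volumes_block | default([]) + [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid, 'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}"
      loop: "{{ ceph_osd_devices | dict2items }}"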
2025-06-11 14:42:40.203664 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-11 14:42:40.203678 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.136) 0:00:10.103 ********
2025-06-11 14:42:40.203686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '28682609-b410-5575-84cb-1d408b8d4d4a'}})
2025-06-11 14:42:40.203699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6a3d2e7-9824-554b-8cae-981831ed9e32'}})
2025-06-11 14:42:40.203707 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203714 | orchestrator |
2025-06-11 14:42:40.203722 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-11 14:42:40.203730 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.148) 0:00:10.251 ********
2025-06-11 14:42:40.203739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '28682609-b410-5575-84cb-1d408b8d4d4a'}})
2025-06-11 14:42:40.203748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6a3d2e7-9824-554b-8cae-981831ed9e32'}})
2025-06-11 14:42:40.203757 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203766 | orchestrator |
2025-06-11 14:42:40.203775 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-11 14:42:40.203784 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.137) 0:00:10.388 ********
2025-06-11 14:42:40.203815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '28682609-b410-5575-84cb-1d408b8d4d4a'}})
2025-06-11 14:42:40.203824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6a3d2e7-9824-554b-8cae-981831ed9e32'}})
2025-06-11 14:42:40.203834 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203842 | orchestrator |
2025-06-11 14:42:40.203865 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-11 14:42:40.203875 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.260) 0:00:10.648 ********
2025-06-11 14:42:40.203884 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:42:40.203892 | orchestrator |
2025-06-11 14:42:40.203901 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-11 14:42:40.203910 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.113) 0:00:10.762 ********
2025-06-11 14:42:40.203919 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:42:40.203928 | orchestrator |
2025-06-11 14:42:40.203937 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-11 14:42:40.203946 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.129) 0:00:10.891 ********
2025-06-11 14:42:40.203955 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203964 | orchestrator |
2025-06-11 14:42:40.203972 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-11 14:42:40.203979 | orchestrator | Wednesday 11 June 2025 14:42:36 +0000 (0:00:00.100) 0:00:10.992 ********
2025-06-11 14:42:40.203987 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.203995 | orchestrator |
2025-06-11 14:42:40.204002 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-11 14:42:40.204010 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.115) 0:00:11.108 ********
2025-06-11 14:42:40.204018 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.204025 | orchestrator |
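With the db/wal variants skipped, "Compile lvm_volumes" reduces to concatenating whichever per-variant lists exist, and the "Set ... config data" tasks wrap the result for the configuration writer. A plausible shape, with the per-variant list names invented for illustration (only _ceph_configure_lvm_config_data, ceph_osd_devices and lvm_volumes are confirmed by the debug output below):

    - name: Compile lvm_volumes
      ansible.builtin.set_fact:
        lvm_volumes: "{{ (lvm_volumes_block | default([])) + (lvm_volumes_block_db | default([])) + (lvm_volumes_block_wal | default([])) + (lvm_volumes_block_db_wal | default([])) }}"

    - name: Set OSD devices config data
      ansible.builtin.set_fact:
        _ceph_configure_lvm_config_data:
          ceph_osd_devices: "{{ ceph_osd_devices }}"
          lvm_volumes: "{{ lvm_volumes }}"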
2025-06-11 14:42:40.204033 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-11 14:42:40.204041 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.131) 0:00:11.240 ********
2025-06-11 14:42:40.204049 | orchestrator | ok: [testbed-node-3] => {
2025-06-11 14:42:40.204056 | orchestrator |     "ceph_osd_devices": {
2025-06-11 14:42:40.204064 | orchestrator |         "sdb": {
2025-06-11 14:42:40.204072 | orchestrator |             "osd_lvm_uuid": "28682609-b410-5575-84cb-1d408b8d4d4a"
2025-06-11 14:42:40.204080 | orchestrator |         },
2025-06-11 14:42:40.204091 | orchestrator |         "sdc": {
2025-06-11 14:42:40.204099 | orchestrator |             "osd_lvm_uuid": "b6a3d2e7-9824-554b-8cae-981831ed9e32"
2025-06-11 14:42:40.204112 | orchestrator |         }
2025-06-11 14:42:40.204120 | orchestrator |     }
2025-06-11 14:42:40.204128 | orchestrator | }
2025-06-11 14:42:40.204136 | orchestrator |
2025-06-11 14:42:40.204144 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-11 14:42:40.204152 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.131) 0:00:11.371 ********
2025-06-11 14:42:40.204159 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.204167 | orchestrator |
2025-06-11 14:42:40.204175 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-11 14:42:40.204183 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.104) 0:00:11.475 ********
2025-06-11 14:42:40.204190 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.204198 | orchestrator |
2025-06-11 14:42:40.204206 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-11 14:42:40.204213 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.127) 0:00:11.603 ********
2025-06-11 14:42:40.204221 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:42:40.204229 | orchestrator |
2025-06-11 14:42:40.204236 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-11 14:42:40.204244 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.129) 0:00:11.733 ********
2025-06-11 14:42:40.204252 | orchestrator | changed: [testbed-node-3] => {
2025-06-11 14:42:40.204259 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-11 14:42:40.204267 | orchestrator |         "ceph_osd_devices": {
2025-06-11 14:42:40.204279 | orchestrator |             "sdb": {
2025-06-11 14:42:40.204287 | orchestrator |                 "osd_lvm_uuid": "28682609-b410-5575-84cb-1d408b8d4d4a"
2025-06-11 14:42:40.204295 | orchestrator |             },
2025-06-11 14:42:40.204303 | orchestrator |             "sdc": {
2025-06-11 14:42:40.204311 | orchestrator |                 "osd_lvm_uuid": "b6a3d2e7-9824-554b-8cae-981831ed9e32"
2025-06-11 14:42:40.204319 | orchestrator |             }
2025-06-11 14:42:40.204326 | orchestrator |         },
2025-06-11 14:42:40.204334 | orchestrator |         "lvm_volumes": [
2025-06-11 14:42:40.204342 | orchestrator |             {
2025-06-11 14:42:40.204349 | orchestrator |                 "data": "osd-block-28682609-b410-5575-84cb-1d408b8d4d4a",
2025-06-11 14:42:40.204357 | orchestrator |                 "data_vg": "ceph-28682609-b410-5575-84cb-1d408b8d4d4a"
2025-06-11 14:42:40.204365 | orchestrator |             },
2025-06-11 14:42:40.204373 | orchestrator |             {
2025-06-11 14:42:40.204380 | orchestrator |                 "data": "osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32",
2025-06-11 14:42:40.204388 | orchestrator |                 "data_vg": "ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32"
2025-06-11 14:42:40.204396 | orchestrator |             }
2025-06-11 14:42:40.204403 | orchestrator |         ]
2025-06-11 14:42:40.204411 | orchestrator |     }
2025-06-11 14:42:40.204419 | orchestrator | }
2025-06-11 14:42:40.204427 | orchestrator |
2025-06-11 14:42:40.204434 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-11 14:42:40.204442 | orchestrator | Wednesday 11 June 2025 14:42:37 +0000 (0:00:00.190) 0:00:11.923 ********
2025-06-11 14:42:40.204450 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-11 14:42:40.204457 | orchestrator |
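Note that "Print configuration data" reports changed although it only prints: forcing a changed result is what notifies the "Write configuration file" handler, which then runs once per host, delegated to testbed-manager (192.168.16.5). A sketch of that wiring as a self-contained play; the hosts pattern and destination path are hypothetical:

    - hosts: testbed-nodes
      tasks:
        - name: Print configuration data
          ansible.builtin.debug:
            var: _ceph_configure_lvm_config_data
          changed_when: true               # a debug task never reports 'changed' on its own
          notify: Write configuration file
      handlers:
        - name: Write configuration file
          ansible.builtin.copy:
            content: "{{ _ceph_configure_lvm_config_data | to_nice_yaml }}"
            dest: "/opt/configuration/inventory/host_vars/{{ inventory_hostname }}.yml"   # hypothetical path
          delegate_to: testbed-manager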
2025-06-11 14:42:40.204465 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-11 14:42:40.204473 | orchestrator |
2025-06-11 14:42:40.204481 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-11 14:42:40.204488 | orchestrator | Wednesday 11 June 2025 14:42:39 +0000 (0:00:01.856) 0:00:13.780 ********
2025-06-11 14:42:40.204496 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-11 14:42:40.204504 | orchestrator |
2025-06-11 14:42:40.204511 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-11 14:42:40.204519 | orchestrator | Wednesday 11 June 2025 14:42:39 +0000 (0:00:00.254) 0:00:14.034 ********
2025-06-11 14:42:40.204527 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:42:40.204540 | orchestrator |
2025-06-11 14:42:40.204548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:42:40.204561 | orchestrator | Wednesday 11 June 2025 14:42:40 +0000 (0:00:00.207) 0:00:14.242 ********
2025-06-11 14:42:47.158504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-11 14:42:47.158644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-11 14:42:47.158672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-11 14:42:47.158693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-11 14:42:47.158739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-11 14:42:47.158758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-11 14:42:47.158772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-11 14:42:47.158836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-11 14:42:47.158856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-11 14:42:47.158871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-11 14:42:47.158885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-11 14:42:47.158903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-11 14:42:47.158919 | orchestrator |
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-11 14:42:47.158935 | orchestrator | 2025-06-11 14:42:47.158953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.158971 | orchestrator | Wednesday 11 June 2025 14:42:40 +0000 (0:00:00.363) 0:00:14.605 ******** 2025-06-11 14:42:47.158985 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159000 | orchestrator | 2025-06-11 14:42:47.159018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159035 | orchestrator | Wednesday 11 June 2025 14:42:40 +0000 (0:00:00.189) 0:00:14.794 ******** 2025-06-11 14:42:47.159050 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159064 | orchestrator | 2025-06-11 14:42:47.159082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159102 | orchestrator | Wednesday 11 June 2025 14:42:40 +0000 (0:00:00.186) 0:00:14.981 ******** 2025-06-11 14:42:47.159121 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159140 | orchestrator | 2025-06-11 14:42:47.159156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159171 | orchestrator | Wednesday 11 June 2025 14:42:41 +0000 (0:00:00.171) 0:00:15.153 ******** 2025-06-11 14:42:47.159184 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159198 | orchestrator | 2025-06-11 14:42:47.159212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159226 | orchestrator | Wednesday 11 June 2025 14:42:41 +0000 (0:00:00.214) 0:00:15.367 ******** 2025-06-11 14:42:47.159250 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159269 | orchestrator | 2025-06-11 14:42:47.159285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159299 | orchestrator | Wednesday 11 June 2025 14:42:41 +0000 (0:00:00.181) 0:00:15.549 ******** 2025-06-11 14:42:47.159315 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159333 | orchestrator | 2025-06-11 14:42:47.159350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159369 | orchestrator | Wednesday 11 June 2025 14:42:41 +0000 (0:00:00.472) 0:00:16.021 ******** 2025-06-11 14:42:47.159387 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159403 | orchestrator | 2025-06-11 14:42:47.159442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159475 | orchestrator | Wednesday 11 June 2025 14:42:42 +0000 (0:00:00.211) 0:00:16.232 ******** 2025-06-11 14:42:47.159493 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.159507 | orchestrator | 2025-06-11 14:42:47.159523 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159537 | orchestrator | Wednesday 11 June 2025 14:42:42 +0000 (0:00:00.184) 0:00:16.417 ******** 2025-06-11 14:42:47.159551 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29) 2025-06-11 14:42:47.159569 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29) 2025-06-11 14:42:47.159583 | orchestrator | 2025-06-11 
14:42:47.159597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159611 | orchestrator | Wednesday 11 June 2025 14:42:42 +0000 (0:00:00.453) 0:00:16.870 ******** 2025-06-11 14:42:47.159629 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86) 2025-06-11 14:42:47.159645 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86) 2025-06-11 14:42:47.159662 | orchestrator | 2025-06-11 14:42:47.159679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159695 | orchestrator | Wednesday 11 June 2025 14:42:43 +0000 (0:00:00.390) 0:00:17.261 ******** 2025-06-11 14:42:47.159712 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9) 2025-06-11 14:42:47.159730 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9) 2025-06-11 14:42:47.159743 | orchestrator | 2025-06-11 14:42:47.159756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159771 | orchestrator | Wednesday 11 June 2025 14:42:43 +0000 (0:00:00.404) 0:00:17.666 ******** 2025-06-11 14:42:47.159827 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b) 2025-06-11 14:42:47.159842 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b) 2025-06-11 14:42:47.159856 | orchestrator | 2025-06-11 14:42:47.159869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:42:47.159886 | orchestrator | Wednesday 11 June 2025 14:42:43 +0000 (0:00:00.379) 0:00:18.046 ******** 2025-06-11 14:42:47.159904 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-11 14:42:47.159920 | orchestrator | 2025-06-11 14:42:47.159938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.159955 | orchestrator | Wednesday 11 June 2025 14:42:44 +0000 (0:00:00.292) 0:00:18.338 ******** 2025-06-11 14:42:47.159970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-11 14:42:47.159984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-11 14:42:47.159998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-11 14:42:47.160015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-11 14:42:47.160033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-11 14:42:47.160051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-11 14:42:47.160068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-11 14:42:47.160084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-11 14:42:47.160097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-11 14:42:47.160123 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-11 14:42:47.160140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-11 14:42:47.160153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-11 14:42:47.160165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-11 14:42:47.160179 | orchestrator | 2025-06-11 14:42:47.160192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160206 | orchestrator | Wednesday 11 June 2025 14:42:44 +0000 (0:00:00.346) 0:00:18.684 ******** 2025-06-11 14:42:47.160220 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160233 | orchestrator | 2025-06-11 14:42:47.160247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160260 | orchestrator | Wednesday 11 June 2025 14:42:44 +0000 (0:00:00.196) 0:00:18.881 ******** 2025-06-11 14:42:47.160273 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160287 | orchestrator | 2025-06-11 14:42:47.160300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160314 | orchestrator | Wednesday 11 June 2025 14:42:45 +0000 (0:00:00.482) 0:00:19.363 ******** 2025-06-11 14:42:47.160331 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160344 | orchestrator | 2025-06-11 14:42:47.160358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160374 | orchestrator | Wednesday 11 June 2025 14:42:45 +0000 (0:00:00.191) 0:00:19.554 ******** 2025-06-11 14:42:47.160391 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160408 | orchestrator | 2025-06-11 14:42:47.160426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160443 | orchestrator | Wednesday 11 June 2025 14:42:45 +0000 (0:00:00.192) 0:00:19.747 ******** 2025-06-11 14:42:47.160461 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160475 | orchestrator | 2025-06-11 14:42:47.160488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160502 | orchestrator | Wednesday 11 June 2025 14:42:45 +0000 (0:00:00.182) 0:00:19.929 ******** 2025-06-11 14:42:47.160519 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160536 | orchestrator | 2025-06-11 14:42:47.160554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160571 | orchestrator | Wednesday 11 June 2025 14:42:46 +0000 (0:00:00.180) 0:00:20.110 ******** 2025-06-11 14:42:47.160588 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160604 | orchestrator | 2025-06-11 14:42:47.160617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160631 | orchestrator | Wednesday 11 June 2025 14:42:46 +0000 (0:00:00.162) 0:00:20.272 ******** 2025-06-11 14:42:47.160648 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160663 | orchestrator | 2025-06-11 14:42:47.160678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160693 | orchestrator | Wednesday 11 June 2025 
14:42:46 +0000 (0:00:00.174) 0:00:20.446 ******** 2025-06-11 14:42:47.160716 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-11 14:42:47.160731 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-11 14:42:47.160747 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-11 14:42:47.160760 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-11 14:42:47.160773 | orchestrator | 2025-06-11 14:42:47.160788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:47.160823 | orchestrator | Wednesday 11 June 2025 14:42:46 +0000 (0:00:00.591) 0:00:21.037 ******** 2025-06-11 14:42:47.160836 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:47.160849 | orchestrator | 2025-06-11 14:42:47.160872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:53.108145 | orchestrator | Wednesday 11 June 2025 14:42:47 +0000 (0:00:00.160) 0:00:21.198 ******** 2025-06-11 14:42:53.108280 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108296 | orchestrator | 2025-06-11 14:42:53.108307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:53.108317 | orchestrator | Wednesday 11 June 2025 14:42:47 +0000 (0:00:00.170) 0:00:21.368 ******** 2025-06-11 14:42:53.108327 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108337 | orchestrator | 2025-06-11 14:42:53.108347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:42:53.108357 | orchestrator | Wednesday 11 June 2025 14:42:47 +0000 (0:00:00.180) 0:00:21.549 ******** 2025-06-11 14:42:53.108366 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108376 | orchestrator | 2025-06-11 14:42:53.108385 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-11 14:42:53.108411 | orchestrator | Wednesday 11 June 2025 14:42:47 +0000 (0:00:00.180) 0:00:21.729 ******** 2025-06-11 14:42:53.108421 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-11 14:42:53.108431 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-11 14:42:53.108451 | orchestrator | 2025-06-11 14:42:53.108474 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-11 14:42:53.108485 | orchestrator | Wednesday 11 June 2025 14:42:47 +0000 (0:00:00.265) 0:00:21.995 ******** 2025-06-11 14:42:53.108494 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108504 | orchestrator | 2025-06-11 14:42:53.108513 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-11 14:42:53.108523 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.125) 0:00:22.120 ******** 2025-06-11 14:42:53.108532 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108541 | orchestrator | 2025-06-11 14:42:53.108551 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-11 14:42:53.108560 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.113) 0:00:22.234 ******** 2025-06-11 14:42:53.108570 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:42:53.108579 | orchestrator | 2025-06-11 14:42:53.108589 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-11 
14:42:53.108598 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.109) 0:00:22.343 ********
2025-06-11 14:42:53.108607 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:42:53.108618 | orchestrator |
2025-06-11 14:42:53.108627 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-11 14:42:53.108637 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.128) 0:00:22.471 ********
2025-06-11 14:42:53.108647 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd502667e-47a1-548a-a5f2-2993142d2957'}})
2025-06-11 14:42:53.108658 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40a0a619-d38c-5879-89ae-a3eefd65fa41'}})
2025-06-11 14:42:53.108667 | orchestrator |
2025-06-11 14:42:53.108677 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-11 14:42:53.108688 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.147) 0:00:22.619 ********
2025-06-11 14:42:53.108700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd502667e-47a1-548a-a5f2-2993142d2957'}})
2025-06-11 14:42:53.108713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40a0a619-d38c-5879-89ae-a3eefd65fa41'}})
2025-06-11 14:42:53.108723 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.108733 | orchestrator |
2025-06-11 14:42:53.108744 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-11 14:42:53.108755 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.126) 0:00:22.746 ********
2025-06-11 14:42:53.108766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd502667e-47a1-548a-a5f2-2993142d2957'}})
2025-06-11 14:42:53.108777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40a0a619-d38c-5879-89ae-a3eefd65fa41'}})
2025-06-11 14:42:53.108842 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.108863 | orchestrator |
2025-06-11 14:42:53.108882 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-11 14:42:53.108900 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.150) 0:00:22.896 ********
2025-06-11 14:42:53.108911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd502667e-47a1-548a-a5f2-2993142d2957'}})
2025-06-11 14:42:53.108920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40a0a619-d38c-5879-89ae-a3eefd65fa41'}})
2025-06-11 14:42:53.108930 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.108939 | orchestrator |
2025-06-11 14:42:53.108949 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-11 14:42:53.108958 | orchestrator | Wednesday 11 June 2025 14:42:48 +0000 (0:00:00.122) 0:00:23.018 ********
2025-06-11 14:42:53.108968 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:42:53.108977 | orchestrator |
2025-06-11 14:42:53.108986 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-11 14:42:53.108996 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.122) 0:00:23.141 ********
2025-06-11 14:42:53.109006 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:42:53.109015 | orchestrator |
2025-06-11 14:42:53.109024 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-11 14:42:53.109034 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.130) 0:00:23.271 ********
2025-06-11 14:42:53.109044 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109053 | orchestrator |
2025-06-11 14:42:53.109082 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-11 14:42:53.109092 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.120) 0:00:23.392 ********
2025-06-11 14:42:53.109101 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109111 | orchestrator |
2025-06-11 14:42:53.109120 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-11 14:42:53.109129 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.250) 0:00:23.643 ********
2025-06-11 14:42:53.109139 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109148 | orchestrator |
2025-06-11 14:42:53.109157 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-11 14:42:53.109167 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.133) 0:00:23.776 ********
2025-06-11 14:42:53.109176 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:42:53.109185 | orchestrator |     "ceph_osd_devices": {
2025-06-11 14:42:53.109195 | orchestrator |         "sdb": {
2025-06-11 14:42:53.109204 | orchestrator |             "osd_lvm_uuid": "d502667e-47a1-548a-a5f2-2993142d2957"
2025-06-11 14:42:53.109214 | orchestrator |         },
2025-06-11 14:42:53.109224 | orchestrator |         "sdc": {
2025-06-11 14:42:53.109233 | orchestrator |             "osd_lvm_uuid": "40a0a619-d38c-5879-89ae-a3eefd65fa41"
2025-06-11 14:42:53.109242 | orchestrator |         }
2025-06-11 14:42:53.109252 | orchestrator |     }
2025-06-11 14:42:53.109261 | orchestrator | }
2025-06-11 14:42:53.109271 | orchestrator |
2025-06-11 14:42:53.109280 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-11 14:42:53.109290 | orchestrator | Wednesday 11 June 2025 14:42:49 +0000 (0:00:00.144) 0:00:23.921 ********
2025-06-11 14:42:53.109299 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109309 | orchestrator |
2025-06-11 14:42:53.109326 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-11 14:42:53.109336 | orchestrator | Wednesday 11 June 2025 14:42:50 +0000 (0:00:00.128) 0:00:24.050 ********
2025-06-11 14:42:53.109346 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109355 | orchestrator |
2025-06-11 14:42:53.109364 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-11 14:42:53.109381 | orchestrator | Wednesday 11 June 2025 14:42:50 +0000 (0:00:00.136) 0:00:24.186 ********
2025-06-11 14:42:53.109390 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:42:53.109399 | orchestrator |
2025-06-11 14:42:53.109409 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-11 14:42:53.109418 | orchestrator | Wednesday 11 June 2025 14:42:50 +0000 (0:00:00.143) 0:00:24.330 ********
2025-06-11 14:42:53.109427 | orchestrator | changed: [testbed-node-4] => {
2025-06-11 14:42:53.109437 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-11 14:42:53.109446 | orchestrator |         "ceph_osd_devices": {
2025-06-11 14:42:53.109456 | orchestrator |             "sdb": {
2025-06-11 14:42:53.109465 | orchestrator |                 "osd_lvm_uuid": "d502667e-47a1-548a-a5f2-2993142d2957"
2025-06-11 14:42:53.109475 | orchestrator |             },
2025-06-11 14:42:53.109484 | orchestrator |             "sdc": {
2025-06-11 14:42:53.109493 | orchestrator |                 "osd_lvm_uuid": "40a0a619-d38c-5879-89ae-a3eefd65fa41"
2025-06-11 14:42:53.109503 | orchestrator |             }
2025-06-11 14:42:53.109512 | orchestrator |         },
2025-06-11 14:42:53.109521 | orchestrator |         "lvm_volumes": [
2025-06-11 14:42:53.109531 | orchestrator |             {
2025-06-11 14:42:53.109540 | orchestrator |                 "data": "osd-block-d502667e-47a1-548a-a5f2-2993142d2957",
2025-06-11 14:42:53.109550 | orchestrator |                 "data_vg": "ceph-d502667e-47a1-548a-a5f2-2993142d2957"
2025-06-11 14:42:53.109559 | orchestrator |             },
2025-06-11 14:42:53.109569 | orchestrator |             {
2025-06-11 14:42:53.109578 | orchestrator |                 "data": "osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41",
2025-06-11 14:42:53.109587 | orchestrator |                 "data_vg": "ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41"
2025-06-11 14:42:53.109597 | orchestrator |             }
2025-06-11 14:42:53.109606 | orchestrator |         ]
2025-06-11 14:42:53.109615 | orchestrator |     }
2025-06-11 14:42:53.109624 | orchestrator | }
2025-06-11 14:42:53.109634 | orchestrator |
2025-06-11 14:42:53.109643 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-11 14:42:53.109653 | orchestrator | Wednesday 11 June 2025 14:42:50 +0000 (0:00:00.215) 0:00:24.545 ********
2025-06-11 14:42:53.109662 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-11 14:42:53.109671 | orchestrator |
2025-06-11 14:42:53.109681 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-11 14:42:53.109690 | orchestrator |
2025-06-11 14:42:53.109700 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-11 14:42:53.109709 | orchestrator | Wednesday 11 June 2025 14:42:51 +0000 (0:00:01.074) 0:00:25.620 ********
2025-06-11 14:42:53.109718 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-11 14:42:53.109727 | orchestrator |
2025-06-11 14:42:53.109737 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-11 14:42:53.109746 | orchestrator | Wednesday 11 June 2025 14:42:52 +0000 (0:00:00.486) 0:00:26.106 ********
2025-06-11 14:42:53.109755 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:42:53.109765 | orchestrator |
2025-06-11 14:42:53.109774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:42:53.109783 | orchestrator | Wednesday 11 June 2025 14:42:52 +0000 (0:00:00.655) 0:00:26.761 ********
2025-06-11 14:42:53.109817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-11 14:42:53.109828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-11 14:42:53.109837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-11 14:42:53.109847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-11 14:42:53.109856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-11 14:42:53.109865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-5 => (item=loop5) 2025-06-11 14:42:53.109887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-11 14:43:01.143325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-11 14:43:01.143443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-11 14:43:01.143459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-11 14:43:01.143471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-11 14:43:01.143482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-11 14:43:01.143493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-11 14:43:01.143504 | orchestrator | 2025-06-11 14:43:01.143516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143528 | orchestrator | Wednesday 11 June 2025 14:42:53 +0000 (0:00:00.377) 0:00:27.139 ******** 2025-06-11 14:43:01.143539 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143551 | orchestrator | 2025-06-11 14:43:01.143562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143573 | orchestrator | Wednesday 11 June 2025 14:42:53 +0000 (0:00:00.212) 0:00:27.352 ******** 2025-06-11 14:43:01.143584 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143595 | orchestrator | 2025-06-11 14:43:01.143606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143616 | orchestrator | Wednesday 11 June 2025 14:42:53 +0000 (0:00:00.195) 0:00:27.548 ******** 2025-06-11 14:43:01.143627 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143638 | orchestrator | 2025-06-11 14:43:01.143648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143660 | orchestrator | Wednesday 11 June 2025 14:42:53 +0000 (0:00:00.201) 0:00:27.749 ******** 2025-06-11 14:43:01.143670 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143681 | orchestrator | 2025-06-11 14:43:01.143692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143703 | orchestrator | Wednesday 11 June 2025 14:42:53 +0000 (0:00:00.196) 0:00:27.946 ******** 2025-06-11 14:43:01.143714 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143724 | orchestrator | 2025-06-11 14:43:01.143735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143746 | orchestrator | Wednesday 11 June 2025 14:42:54 +0000 (0:00:00.197) 0:00:28.143 ******** 2025-06-11 14:43:01.143757 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143767 | orchestrator | 2025-06-11 14:43:01.143778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143830 | orchestrator | Wednesday 11 June 2025 14:42:54 +0000 (0:00:00.200) 0:00:28.344 ******** 2025-06-11 14:43:01.143844 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143858 | orchestrator | 2025-06-11 14:43:01.143870 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-11 14:43:01.143882 | orchestrator | Wednesday 11 June 2025 14:42:54 +0000 (0:00:00.221) 0:00:28.565 ******** 2025-06-11 14:43:01.143895 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.143907 | orchestrator | 2025-06-11 14:43:01.143920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.143933 | orchestrator | Wednesday 11 June 2025 14:42:54 +0000 (0:00:00.182) 0:00:28.748 ******** 2025-06-11 14:43:01.143945 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9) 2025-06-11 14:43:01.143959 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9) 2025-06-11 14:43:01.143971 | orchestrator | 2025-06-11 14:43:01.143984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.144021 | orchestrator | Wednesday 11 June 2025 14:42:55 +0000 (0:00:00.591) 0:00:29.340 ******** 2025-06-11 14:43:01.144034 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b) 2025-06-11 14:43:01.144046 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b) 2025-06-11 14:43:01.144058 | orchestrator | 2025-06-11 14:43:01.144070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.144082 | orchestrator | Wednesday 11 June 2025 14:42:56 +0000 (0:00:00.782) 0:00:30.122 ******** 2025-06-11 14:43:01.144095 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961) 2025-06-11 14:43:01.144108 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961) 2025-06-11 14:43:01.144121 | orchestrator | 2025-06-11 14:43:01.144150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.144163 | orchestrator | Wednesday 11 June 2025 14:42:56 +0000 (0:00:00.415) 0:00:30.537 ******** 2025-06-11 14:43:01.144175 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86) 2025-06-11 14:43:01.144188 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86) 2025-06-11 14:43:01.144200 | orchestrator | 2025-06-11 14:43:01.144213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:01.144224 | orchestrator | Wednesday 11 June 2025 14:42:56 +0000 (0:00:00.416) 0:00:30.954 ******** 2025-06-11 14:43:01.144235 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-11 14:43:01.144246 | orchestrator | 2025-06-11 14:43:01.144256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144267 | orchestrator | Wednesday 11 June 2025 14:42:57 +0000 (0:00:00.319) 0:00:31.274 ******** 2025-06-11 14:43:01.144296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-11 14:43:01.144308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-11 14:43:01.144319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-11 14:43:01.144330 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-11 14:43:01.144340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-11 14:43:01.144351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-11 14:43:01.144361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-11 14:43:01.144372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-11 14:43:01.144382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-11 14:43:01.144400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-11 14:43:01.144411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-11 14:43:01.144422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-11 14:43:01.144433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-11 14:43:01.144443 | orchestrator | 2025-06-11 14:43:01.144454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144464 | orchestrator | Wednesday 11 June 2025 14:42:57 +0000 (0:00:00.380) 0:00:31.655 ******** 2025-06-11 14:43:01.144475 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144486 | orchestrator | 2025-06-11 14:43:01.144496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144516 | orchestrator | Wednesday 11 June 2025 14:42:57 +0000 (0:00:00.182) 0:00:31.838 ******** 2025-06-11 14:43:01.144527 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144538 | orchestrator | 2025-06-11 14:43:01.144548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144559 | orchestrator | Wednesday 11 June 2025 14:42:58 +0000 (0:00:00.220) 0:00:32.059 ******** 2025-06-11 14:43:01.144570 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144580 | orchestrator | 2025-06-11 14:43:01.144591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144602 | orchestrator | Wednesday 11 June 2025 14:42:58 +0000 (0:00:00.208) 0:00:32.267 ******** 2025-06-11 14:43:01.144612 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144623 | orchestrator | 2025-06-11 14:43:01.144633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144644 | orchestrator | Wednesday 11 June 2025 14:42:58 +0000 (0:00:00.198) 0:00:32.465 ******** 2025-06-11 14:43:01.144655 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144665 | orchestrator | 2025-06-11 14:43:01.144676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144687 | orchestrator | Wednesday 11 June 2025 14:42:58 +0000 (0:00:00.192) 0:00:32.657 ******** 2025-06-11 14:43:01.144697 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144708 | orchestrator | 2025-06-11 14:43:01.144719 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-11 14:43:01.144729 | orchestrator | Wednesday 11 June 2025 14:42:59 +0000 (0:00:00.617) 0:00:33.275 ******** 2025-06-11 14:43:01.144740 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144750 | orchestrator | 2025-06-11 14:43:01.144761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144772 | orchestrator | Wednesday 11 June 2025 14:42:59 +0000 (0:00:00.260) 0:00:33.535 ******** 2025-06-11 14:43:01.144782 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144824 | orchestrator | 2025-06-11 14:43:01.144845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144863 | orchestrator | Wednesday 11 June 2025 14:42:59 +0000 (0:00:00.217) 0:00:33.753 ******** 2025-06-11 14:43:01.144881 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-11 14:43:01.144893 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-11 14:43:01.144904 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-11 14:43:01.144915 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-11 14:43:01.144925 | orchestrator | 2025-06-11 14:43:01.144936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144946 | orchestrator | Wednesday 11 June 2025 14:43:00 +0000 (0:00:00.634) 0:00:34.387 ******** 2025-06-11 14:43:01.144957 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.144968 | orchestrator | 2025-06-11 14:43:01.144978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.144989 | orchestrator | Wednesday 11 June 2025 14:43:00 +0000 (0:00:00.198) 0:00:34.585 ******** 2025-06-11 14:43:01.144999 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.145010 | orchestrator | 2025-06-11 14:43:01.145020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.145031 | orchestrator | Wednesday 11 June 2025 14:43:00 +0000 (0:00:00.202) 0:00:34.788 ******** 2025-06-11 14:43:01.145042 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.145052 | orchestrator | 2025-06-11 14:43:01.145063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:01.145074 | orchestrator | Wednesday 11 June 2025 14:43:00 +0000 (0:00:00.194) 0:00:34.982 ******** 2025-06-11 14:43:01.145084 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:01.145095 | orchestrator | 2025-06-11 14:43:01.145106 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-11 14:43:01.145123 | orchestrator | Wednesday 11 June 2025 14:43:01 +0000 (0:00:00.198) 0:00:35.181 ******** 2025-06-11 14:43:05.329140 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-11 14:43:05.329231 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-11 14:43:05.329248 | orchestrator | 2025-06-11 14:43:05.329262 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-11 14:43:05.329273 | orchestrator | Wednesday 11 June 2025 14:43:01 +0000 (0:00:00.169) 0:00:35.350 ******** 2025-06-11 14:43:05.329284 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329295 | orchestrator | 2025-06-11 14:43:05.329306 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-11 14:43:05.329317 | orchestrator | Wednesday 11 June 2025 14:43:01 +0000 (0:00:00.135) 0:00:35.486 ******** 2025-06-11 14:43:05.329327 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329338 | orchestrator | 2025-06-11 14:43:05.329349 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-11 14:43:05.329360 | orchestrator | Wednesday 11 June 2025 14:43:01 +0000 (0:00:00.137) 0:00:35.623 ******** 2025-06-11 14:43:05.329370 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329381 | orchestrator | 2025-06-11 14:43:05.329391 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-11 14:43:05.329402 | orchestrator | Wednesday 11 June 2025 14:43:01 +0000 (0:00:00.139) 0:00:35.763 ******** 2025-06-11 14:43:05.329413 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:43:05.329424 | orchestrator | 2025-06-11 14:43:05.329434 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-11 14:43:05.329445 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.304) 0:00:36.067 ******** 2025-06-11 14:43:05.329456 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af7ee71e-f6e2-506a-9b19-157b61fbf28d'}}) 2025-06-11 14:43:05.329467 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}}) 2025-06-11 14:43:05.329478 | orchestrator | 2025-06-11 14:43:05.329488 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-11 14:43:05.329499 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.189) 0:00:36.256 ******** 2025-06-11 14:43:05.329510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af7ee71e-f6e2-506a-9b19-157b61fbf28d'}})  2025-06-11 14:43:05.329522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}})  2025-06-11 14:43:05.329533 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329544 | orchestrator | 2025-06-11 14:43:05.329554 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-11 14:43:05.329565 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.152) 0:00:36.409 ******** 2025-06-11 14:43:05.329583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af7ee71e-f6e2-506a-9b19-157b61fbf28d'}})  2025-06-11 14:43:05.329629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}})  2025-06-11 14:43:05.329656 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329676 | orchestrator | 2025-06-11 14:43:05.329696 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-11 14:43:05.329718 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.155) 0:00:36.564 ******** 2025-06-11 14:43:05.329745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af7ee71e-f6e2-506a-9b19-157b61fbf28d'}})  2025-06-11 14:43:05.329766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}})  2025-06-11 
14:43:05.329786 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.329864 | orchestrator | 2025-06-11 14:43:05.329883 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-11 14:43:05.329930 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.142) 0:00:36.707 ******** 2025-06-11 14:43:05.329947 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:43:05.329960 | orchestrator | 2025-06-11 14:43:05.329972 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-11 14:43:05.329985 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.126) 0:00:36.834 ******** 2025-06-11 14:43:05.329997 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:43:05.330010 | orchestrator | 2025-06-11 14:43:05.330074 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-11 14:43:05.330085 | orchestrator | Wednesday 11 June 2025 14:43:02 +0000 (0:00:00.143) 0:00:36.977 ******** 2025-06-11 14:43:05.330096 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330107 | orchestrator | 2025-06-11 14:43:05.330118 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-11 14:43:05.330129 | orchestrator | Wednesday 11 June 2025 14:43:03 +0000 (0:00:00.132) 0:00:37.109 ******** 2025-06-11 14:43:05.330139 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330150 | orchestrator | 2025-06-11 14:43:05.330161 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-11 14:43:05.330171 | orchestrator | Wednesday 11 June 2025 14:43:03 +0000 (0:00:00.139) 0:00:37.249 ******** 2025-06-11 14:43:05.330182 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330193 | orchestrator | 2025-06-11 14:43:05.330204 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-11 14:43:05.330214 | orchestrator | Wednesday 11 June 2025 14:43:03 +0000 (0:00:00.134) 0:00:37.384 ******** 2025-06-11 14:43:05.330225 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:43:05.330236 | orchestrator |  "ceph_osd_devices": { 2025-06-11 14:43:05.330247 | orchestrator |  "sdb": { 2025-06-11 14:43:05.330273 | orchestrator |  "osd_lvm_uuid": "af7ee71e-f6e2-506a-9b19-157b61fbf28d" 2025-06-11 14:43:05.330317 | orchestrator |  }, 2025-06-11 14:43:05.330337 | orchestrator |  "sdc": { 2025-06-11 14:43:05.330359 | orchestrator |  "osd_lvm_uuid": "ee9e3135-eac7-54c9-a7bd-c984355157b1" 2025-06-11 14:43:05.330379 | orchestrator |  } 2025-06-11 14:43:05.330398 | orchestrator |  } 2025-06-11 14:43:05.330417 | orchestrator | } 2025-06-11 14:43:05.330437 | orchestrator | 2025-06-11 14:43:05.330458 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-11 14:43:05.330478 | orchestrator | Wednesday 11 June 2025 14:43:03 +0000 (0:00:00.173) 0:00:37.557 ******** 2025-06-11 14:43:05.330499 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330518 | orchestrator | 2025-06-11 14:43:05.330539 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-11 14:43:05.330558 | orchestrator | Wednesday 11 June 2025 14:43:03 +0000 (0:00:00.125) 0:00:37.683 ******** 2025-06-11 14:43:05.330577 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330596 | orchestrator | 2025-06-11 14:43:05.330616 | orchestrator | 
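The "Generate lvm_volumes structure (block only)" task above turns each entry of ceph_osd_devices into one lvm_volumes element whose LV and VG names embed the device's osd_lvm_uuid. A minimal, illustrative sketch of such a task follows; the module usage and variable handling are assumptions, not the actual OSISM playbook code:

```yaml
# Illustrative sketch: derive the block-only lvm_volumes list from
# ceph_osd_devices. The resulting entries match the pattern in this log:
#   data:    osd-block-<osd_lvm_uuid>
#   data_vg: ceph-<osd_lvm_uuid>
- name: Generate lvm_volumes structure (block only, sketch)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ (lvm_volumes | default([])) + [{
           'data': 'osd-block-' + item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' + item.value.osd_lvm_uuid
         }] }}
  loop: "{{ ceph_osd_devices | dict2items }}"
```

The loop items here have the same {'key': ..., 'value': {...}} shape that the skipped "block + db" and "block + wal" variants print above.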
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-11 14:43:05.330638 | orchestrator | Wednesday 11 June 2025 14:43:04 +0000 (0:00:00.423) 0:00:38.107 ******** 2025-06-11 14:43:05.330657 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:43:05.330677 | orchestrator | 2025-06-11 14:43:05.330707 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-11 14:43:05.330727 | orchestrator | Wednesday 11 June 2025 14:43:04 +0000 (0:00:00.162) 0:00:38.269 ******** 2025-06-11 14:43:05.330747 | orchestrator | changed: [testbed-node-5] => { 2025-06-11 14:43:05.330767 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-11 14:43:05.330813 | orchestrator |  "ceph_osd_devices": { 2025-06-11 14:43:05.330835 | orchestrator |  "sdb": { 2025-06-11 14:43:05.330853 | orchestrator |  "osd_lvm_uuid": "af7ee71e-f6e2-506a-9b19-157b61fbf28d" 2025-06-11 14:43:05.330873 | orchestrator |  }, 2025-06-11 14:43:05.330892 | orchestrator |  "sdc": { 2025-06-11 14:43:05.330911 | orchestrator |  "osd_lvm_uuid": "ee9e3135-eac7-54c9-a7bd-c984355157b1" 2025-06-11 14:43:05.330945 | orchestrator |  } 2025-06-11 14:43:05.330964 | orchestrator |  }, 2025-06-11 14:43:05.330982 | orchestrator |  "lvm_volumes": [ 2025-06-11 14:43:05.331000 | orchestrator |  { 2025-06-11 14:43:05.331020 | orchestrator |  "data": "osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d", 2025-06-11 14:43:05.331038 | orchestrator |  "data_vg": "ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d" 2025-06-11 14:43:05.331056 | orchestrator |  }, 2025-06-11 14:43:05.331074 | orchestrator |  { 2025-06-11 14:43:05.331092 | orchestrator |  "data": "osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1", 2025-06-11 14:43:05.331111 | orchestrator |  "data_vg": "ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1" 2025-06-11 14:43:05.331129 | orchestrator |  } 2025-06-11 14:43:05.331147 | orchestrator |  ] 2025-06-11 14:43:05.331165 | orchestrator |  } 2025-06-11 14:43:05.331184 | orchestrator | } 2025-06-11 14:43:05.331202 | orchestrator | 2025-06-11 14:43:05.331221 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-11 14:43:05.331240 | orchestrator | Wednesday 11 June 2025 14:43:04 +0000 (0:00:00.210) 0:00:38.480 ******** 2025-06-11 14:43:05.331257 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-11 14:43:05.331275 | orchestrator | 2025-06-11 14:43:05.331293 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:43:05.331314 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-11 14:43:05.331333 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-11 14:43:05.331351 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-11 14:43:05.331370 | orchestrator | 2025-06-11 14:43:05.331388 | orchestrator | 2025-06-11 14:43:05.331407 | orchestrator | 2025-06-11 14:43:05.331425 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:43:05.331443 | orchestrator | Wednesday 11 June 2025 14:43:05 +0000 (0:00:00.875) 0:00:39.356 ******** 2025-06-11 14:43:05.331462 | orchestrator | =============================================================================== 2025-06-11 14:43:05.331480 | orchestrator | Write configuration file 
------------------------------------------------ 3.81s 2025-06-11 14:43:05.331499 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2025-06-11 14:43:05.331518 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s 2025-06-11 14:43:05.331536 | orchestrator | Get initial list of available block devices ----------------------------- 1.06s 2025-06-11 14:43:05.331554 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s 2025-06-11 14:43:05.331572 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2025-06-11 14:43:05.331590 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-06-11 14:43:05.331607 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2025-06-11 14:43:05.331624 | orchestrator | Print DB devices -------------------------------------------------------- 0.69s 2025-06-11 14:43:05.331642 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-06-11 14:43:05.331661 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-06-11 14:43:05.331680 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-06-11 14:43:05.331699 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-06-11 14:43:05.331717 | orchestrator | Print configuration data ------------------------------------------------ 0.62s 2025-06-11 14:43:05.331753 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-06-11 14:43:05.531988 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.59s 2025-06-11 14:43:05.532071 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-06-11 14:43:05.532084 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s 2025-06-11 14:43:05.532096 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.53s 2025-06-11 14:43:05.532107 | orchestrator | Set WAL devices config data --------------------------------------------- 0.51s 2025-06-11 14:43:17.371059 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:43:17.371153 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:43:17.371168 | orchestrator | Registering Redlock._release_script 2025-06-11 14:43:17.421463 | orchestrator | 2025-06-11 14:43:17 | INFO  | Task c92c9699-0452-421d-b3a8-c4d2f69fc5da (sync inventory) is running in background. Output coming soon. 
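The "Write configuration file" handler above persists the printed _ceph_configure_lvm_config_data on the manager node. A hypothetical rendering of the resulting YAML for testbed-node-5, reconstructed purely from the "Print configuration data" output in this log (the file name and exact layout on disk are not shown and remain assumptions):

```yaml
# Hypothetical content of the configuration written for testbed-node-5,
# reconstructed from the values printed by "Print configuration data" above.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: af7ee71e-f6e2-506a-9b19-157b61fbf28d
  sdc:
    osd_lvm_uuid: ee9e3135-eac7-54c9-a7bd-c984355157b1
lvm_volumes:
  - data: osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d
    data_vg: ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d
  - data: osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1
    data_vg: ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1
```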
2025-06-11 14:43:35.908231 | orchestrator | 2025-06-11 14:43:18 | INFO  | Starting group_vars file reorganization 2025-06-11 14:43:35.908326 | orchestrator | 2025-06-11 14:43:18 | INFO  | Moved 0 file(s) to their respective directories 2025-06-11 14:43:35.908343 | orchestrator | 2025-06-11 14:43:18 | INFO  | Group_vars file reorganization completed 2025-06-11 14:43:35.908355 | orchestrator | 2025-06-11 14:43:20 | INFO  | Starting variable preparation from inventory 2025-06-11 14:43:35.908366 | orchestrator | 2025-06-11 14:43:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-11 14:43:35.908377 | orchestrator | 2025-06-11 14:43:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-11 14:43:35.908388 | orchestrator | 2025-06-11 14:43:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-11 14:43:35.908398 | orchestrator | 2025-06-11 14:43:21 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-11 14:43:35.908409 | orchestrator | 2025-06-11 14:43:21 | INFO  | Variable preparation completed: 2025-06-11 14:43:35.908420 | orchestrator | 2025-06-11 14:43:22 | INFO  | Starting inventory overwrite handling 2025-06-11 14:43:35.908431 | orchestrator | 2025-06-11 14:43:22 | INFO  | Handling group overwrites in 99-overwrite 2025-06-11 14:43:35.908442 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group frr:children from 60-generic 2025-06-11 14:43:35.908470 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group storage:children from 50-kolla 2025-06-11 14:43:35.908481 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-11 14:43:35.908492 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-11 14:43:35.908503 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-11 14:43:35.908514 | orchestrator | 2025-06-11 14:43:22 | INFO  | Handling group overwrites in 20-roles 2025-06-11 14:43:35.908525 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-11 14:43:35.908535 | orchestrator | 2025-06-11 14:43:22 | INFO  | Removed 6 group(s) in total 2025-06-11 14:43:35.908553 | orchestrator | 2025-06-11 14:43:22 | INFO  | Inventory overwrite handling completed 2025-06-11 14:43:35.908572 | orchestrator | 2025-06-11 14:43:23 | INFO  | Starting merge of inventory files 2025-06-11 14:43:35.908591 | orchestrator | 2025-06-11 14:43:23 | INFO  | Inventory files merged successfully 2025-06-11 14:43:35.908609 | orchestrator | 2025-06-11 14:43:28 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-11 14:43:35.908628 | orchestrator | 2025-06-11 14:43:34 | INFO  | Successfully wrote ClusterShell configuration 2025-06-11 14:43:35.908677 | orchestrator | [master 03b9a6d] 2025-06-11-14-43 2025-06-11 14:43:35.908700 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-11 14:43:37.642396 | orchestrator | 2025-06-11 14:43:37 | INFO  | Task b1182a7d-f2db-419f-9d44-d78bd5d73976 (ceph-create-lvm-devices) was prepared for execution. 2025-06-11 14:43:37.642499 | orchestrator | 2025-06-11 14:43:37 | INFO  | It takes a moment until task b1182a7d-f2db-419f-9d44-d78bd5d73976 (ceph-create-lvm-devices) has been started and output is visible here. 
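The inventory sync above reports writing three small variable files; their names and the variables they carry are in the log, but their contents are not. A plausible shape for one of them, with a placeholder value rather than anything taken from this run:

```yaml
# Plausible shape of 050-ceph-cluster-fsid.yml as named in the log above;
# the FSID is a placeholder, not the value used in this deployment.
ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000
```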
2025-06-11 14:43:49.220131 | orchestrator | 2025-06-11 14:43:49.220274 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-11 14:43:49.220294 | orchestrator | 2025-06-11 14:43:49.220339 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-11 14:43:49.220365 | orchestrator | Wednesday 11 June 2025 14:43:41 +0000 (0:00:00.304) 0:00:00.304 ******** 2025-06-11 14:43:49.220377 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 14:43:49.220388 | orchestrator | 2025-06-11 14:43:49.220400 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-11 14:43:49.220411 | orchestrator | Wednesday 11 June 2025 14:43:42 +0000 (0:00:00.237) 0:00:00.542 ******** 2025-06-11 14:43:49.220422 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:43:49.220434 | orchestrator | 2025-06-11 14:43:49.220445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220456 | orchestrator | Wednesday 11 June 2025 14:43:42 +0000 (0:00:00.226) 0:00:00.768 ******** 2025-06-11 14:43:49.220467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-11 14:43:49.220479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-11 14:43:49.220489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-11 14:43:49.220500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-11 14:43:49.220511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-11 14:43:49.220522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-11 14:43:49.220532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-11 14:43:49.220559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-11 14:43:49.220571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-11 14:43:49.220581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-11 14:43:49.220593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-11 14:43:49.220604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-11 14:43:49.220615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-11 14:43:49.220625 | orchestrator | 2025-06-11 14:43:49.220636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220647 | orchestrator | Wednesday 11 June 2025 14:43:42 +0000 (0:00:00.397) 0:00:01.166 ******** 2025-06-11 14:43:49.220660 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.220673 | orchestrator | 2025-06-11 14:43:49.220687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220708 | orchestrator | Wednesday 11 June 2025 14:43:43 +0000 (0:00:00.465) 0:00:01.631 ******** 2025-06-11 14:43:49.220728 | orchestrator | skipping: [testbed-node-3] 2025-06-11 
14:43:49.220748 | orchestrator | 2025-06-11 14:43:49.220767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220814 | orchestrator | Wednesday 11 June 2025 14:43:43 +0000 (0:00:00.193) 0:00:01.825 ******** 2025-06-11 14:43:49.220868 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.220889 | orchestrator | 2025-06-11 14:43:49.220907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220920 | orchestrator | Wednesday 11 June 2025 14:43:43 +0000 (0:00:00.207) 0:00:02.033 ******** 2025-06-11 14:43:49.220939 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.220957 | orchestrator | 2025-06-11 14:43:49.220976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.220995 | orchestrator | Wednesday 11 June 2025 14:43:43 +0000 (0:00:00.191) 0:00:02.224 ******** 2025-06-11 14:43:49.221015 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.221034 | orchestrator | 2025-06-11 14:43:49.221053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221071 | orchestrator | Wednesday 11 June 2025 14:43:43 +0000 (0:00:00.206) 0:00:02.430 ******** 2025-06-11 14:43:49.221086 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.221099 | orchestrator | 2025-06-11 14:43:49.221118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221136 | orchestrator | Wednesday 11 June 2025 14:43:44 +0000 (0:00:00.204) 0:00:02.635 ******** 2025-06-11 14:43:49.221160 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.221185 | orchestrator | 2025-06-11 14:43:49.221203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221221 | orchestrator | Wednesday 11 June 2025 14:43:44 +0000 (0:00:00.215) 0:00:02.851 ******** 2025-06-11 14:43:49.221240 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.221277 | orchestrator | 2025-06-11 14:43:49.221289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221300 | orchestrator | Wednesday 11 June 2025 14:43:44 +0000 (0:00:00.194) 0:00:03.046 ******** 2025-06-11 14:43:49.221310 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb) 2025-06-11 14:43:49.221323 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb) 2025-06-11 14:43:49.221334 | orchestrator | 2025-06-11 14:43:49.221344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221355 | orchestrator | Wednesday 11 June 2025 14:43:44 +0000 (0:00:00.404) 0:00:03.450 ******** 2025-06-11 14:43:49.221387 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299) 2025-06-11 14:43:49.221399 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299) 2025-06-11 14:43:49.221410 | orchestrator | 2025-06-11 14:43:49.221420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221438 | orchestrator | Wednesday 11 June 2025 14:43:45 +0000 (0:00:00.409) 0:00:03.860 ******** 2025-06-11 
14:43:49.221457 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3) 2025-06-11 14:43:49.221475 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3) 2025-06-11 14:43:49.221501 | orchestrator | 2025-06-11 14:43:49.221524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221544 | orchestrator | Wednesday 11 June 2025 14:43:45 +0000 (0:00:00.611) 0:00:04.472 ******** 2025-06-11 14:43:49.221561 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc) 2025-06-11 14:43:49.221577 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc) 2025-06-11 14:43:49.221588 | orchestrator | 2025-06-11 14:43:49.221599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-11 14:43:49.221610 | orchestrator | Wednesday 11 June 2025 14:43:46 +0000 (0:00:00.626) 0:00:05.098 ******** 2025-06-11 14:43:49.221620 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-11 14:43:49.221631 | orchestrator | 2025-06-11 14:43:49.221653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.221664 | orchestrator | Wednesday 11 June 2025 14:43:47 +0000 (0:00:00.692) 0:00:05.791 ******** 2025-06-11 14:43:49.221674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-11 14:43:49.221685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-11 14:43:49.221695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-11 14:43:49.221706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-11 14:43:49.221717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-11 14:43:49.221727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-11 14:43:49.221738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-11 14:43:49.221749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-11 14:43:49.221759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-11 14:43:49.221769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-11 14:43:49.221818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-11 14:43:49.221832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-11 14:43:49.221842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-11 14:43:49.221853 | orchestrator | 2025-06-11 14:43:49.221864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.221874 | orchestrator | Wednesday 11 June 2025 14:43:47 +0000 (0:00:00.414) 0:00:06.205 ******** 2025-06-11 14:43:49.221885 | orchestrator | skipping: [testbed-node-3] 
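The repeated "Add known links" tasks above resolve /dev/disk/by-id aliases (scsi-0QEMU_..., scsi-SQEMU_..., ata-QEMU_DVD-ROM_...) back to their kernel device names and add the aliases to the candidate list. One way to express that lookup, as a sketch only; the real _add-device-links.yml may be structured quite differently:

```yaml
# Sketch of the idea behind _add-device-links.yml: find every /dev/disk/by-id
# symlink that resolves to /dev/{{ item }} and record the alias basenames.
- name: Find by-id aliases of /dev/{{ item }}
  ansible.builtin.command: find -L /dev/disk/by-id -samefile /dev/{{ item }}
  register: _aliases
  changed_when: false
  failed_when: _aliases.rc not in [0, 1]

- name: Add aliases to the list of available block devices
  ansible.builtin.set_fact:
    available_devices: >-
      {{ (available_devices | default([]))
         + (_aliases.stdout_lines | map('basename') | list) }}
```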
2025-06-11 14:43:49.221896 | orchestrator | 2025-06-11 14:43:49.221914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.221933 | orchestrator | Wednesday 11 June 2025 14:43:47 +0000 (0:00:00.184) 0:00:06.389 ******** 2025-06-11 14:43:49.221951 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.221978 | orchestrator | 2025-06-11 14:43:49.222000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222083 | orchestrator | Wednesday 11 June 2025 14:43:48 +0000 (0:00:00.195) 0:00:06.584 ******** 2025-06-11 14:43:49.222114 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.222137 | orchestrator | 2025-06-11 14:43:49.222154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222173 | orchestrator | Wednesday 11 June 2025 14:43:48 +0000 (0:00:00.185) 0:00:06.769 ******** 2025-06-11 14:43:49.222191 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.222210 | orchestrator | 2025-06-11 14:43:49.222243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222263 | orchestrator | Wednesday 11 June 2025 14:43:48 +0000 (0:00:00.190) 0:00:06.960 ******** 2025-06-11 14:43:49.222313 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.222331 | orchestrator | 2025-06-11 14:43:49.222349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222367 | orchestrator | Wednesday 11 June 2025 14:43:48 +0000 (0:00:00.185) 0:00:07.146 ******** 2025-06-11 14:43:49.222385 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.222414 | orchestrator | 2025-06-11 14:43:49.222433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222451 | orchestrator | Wednesday 11 June 2025 14:43:48 +0000 (0:00:00.190) 0:00:07.336 ******** 2025-06-11 14:43:49.222470 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:49.222501 | orchestrator | 2025-06-11 14:43:49.222512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:49.222523 | orchestrator | Wednesday 11 June 2025 14:43:49 +0000 (0:00:00.196) 0:00:07.532 ******** 2025-06-11 14:43:49.222546 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.998920 | orchestrator | 2025-06-11 14:43:56.999037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:56.999054 | orchestrator | Wednesday 11 June 2025 14:43:49 +0000 (0:00:00.202) 0:00:07.734 ******** 2025-06-11 14:43:56.999066 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-11 14:43:56.999079 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-11 14:43:56.999090 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-11 14:43:56.999101 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-11 14:43:56.999113 | orchestrator | 2025-06-11 14:43:56.999124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:56.999135 | orchestrator | Wednesday 11 June 2025 14:43:50 +0000 (0:00:00.982) 0:00:08.717 ******** 2025-06-11 14:43:56.999145 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999156 | orchestrator | 2025-06-11 14:43:56.999167 | orchestrator | 
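The "Add known partitions" tasks around this point do the same for partitions: for each parent device the kernel reports, partition names such as sda1, sda14, sda15, and sda16 are appended to the candidate list, while loop devices without partitions are skipped. A compact sketch of that step using the standard gathered facts (an assumption; the actual _add-device-partitions.yml may differ):

```yaml
# Sketch: append the kernel partition names of {{ item }} (e.g. sda1, sda14)
# to the working device list, using the facts collected by setup.
- name: Add known partitions of {{ item }} (sketch)
  ansible.builtin.set_fact:
    available_devices: >-
      {{ (available_devices | default([]))
         + (ansible_facts.devices[item].partitions | default({}) | list) }}
```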
TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:56.999178 | orchestrator | Wednesday 11 June 2025 14:43:50 +0000 (0:00:00.188) 0:00:08.906 ******** 2025-06-11 14:43:56.999189 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999199 | orchestrator | 2025-06-11 14:43:56.999210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:56.999221 | orchestrator | Wednesday 11 June 2025 14:43:50 +0000 (0:00:00.200) 0:00:09.106 ******** 2025-06-11 14:43:56.999232 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999242 | orchestrator | 2025-06-11 14:43:56.999253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-11 14:43:56.999264 | orchestrator | Wednesday 11 June 2025 14:43:50 +0000 (0:00:00.205) 0:00:09.312 ******** 2025-06-11 14:43:56.999274 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999285 | orchestrator | 2025-06-11 14:43:56.999295 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-11 14:43:56.999323 | orchestrator | Wednesday 11 June 2025 14:43:50 +0000 (0:00:00.205) 0:00:09.517 ******** 2025-06-11 14:43:56.999336 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999348 | orchestrator | 2025-06-11 14:43:56.999361 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-11 14:43:56.999373 | orchestrator | Wednesday 11 June 2025 14:43:51 +0000 (0:00:00.141) 0:00:09.659 ******** 2025-06-11 14:43:56.999386 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '28682609-b410-5575-84cb-1d408b8d4d4a'}}) 2025-06-11 14:43:56.999399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6a3d2e7-9824-554b-8cae-981831ed9e32'}}) 2025-06-11 14:43:56.999419 | orchestrator | 2025-06-11 14:43:56.999431 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-11 14:43:56.999444 | orchestrator | Wednesday 11 June 2025 14:43:51 +0000 (0:00:00.192) 0:00:09.851 ******** 2025-06-11 14:43:56.999457 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'}) 2025-06-11 14:43:56.999474 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'}) 2025-06-11 14:43:56.999494 | orchestrator | 2025-06-11 14:43:56.999515 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-11 14:43:56.999536 | orchestrator | Wednesday 11 June 2025 14:43:53 +0000 (0:00:02.028) 0:00:11.880 ******** 2025-06-11 14:43:56.999557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:56.999606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:56.999619 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999631 | orchestrator | 2025-06-11 14:43:56.999643 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-11 
14:43:56.999655 | orchestrator | Wednesday 11 June 2025 14:43:53 +0000 (0:00:00.153) 0:00:12.034 ******** 2025-06-11 14:43:56.999667 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'}) 2025-06-11 14:43:56.999693 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'}) 2025-06-11 14:43:56.999704 | orchestrator | 2025-06-11 14:43:56.999714 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-11 14:43:56.999725 | orchestrator | Wednesday 11 June 2025 14:43:54 +0000 (0:00:01.437) 0:00:13.471 ******** 2025-06-11 14:43:56.999735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:56.999746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:56.999757 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999767 | orchestrator | 2025-06-11 14:43:56.999806 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-11 14:43:56.999825 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.147) 0:00:13.618 ******** 2025-06-11 14:43:56.999839 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999849 | orchestrator | 2025-06-11 14:43:56.999860 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-11 14:43:56.999891 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.128) 0:00:13.746 ******** 2025-06-11 14:43:56.999911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:56.999931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:56.999944 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:56.999955 | orchestrator | 2025-06-11 14:43:56.999966 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-11 14:43:56.999981 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.335) 0:00:14.082 ******** 2025-06-11 14:43:57 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000018 | orchestrator | 2025-06-11 14:43:57.000038 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-11 14:43:57.000054 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.137) 0:00:14.219 ******** 2025-06-11 14:43:57.000066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:57.000076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:57.000088 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000105 | orchestrator | 2025-06-11 14:43:57.000116 | orchestrator | TASK 
[Create DB+WAL VGs] ******************************************************* 2025-06-11 14:43:57.000127 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.158) 0:00:14.378 ******** 2025-06-11 14:43:57.000138 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000149 | orchestrator | 2025-06-11 14:43:57.000164 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-11 14:43:57.000183 | orchestrator | Wednesday 11 June 2025 14:43:55 +0000 (0:00:00.141) 0:00:14.519 ******** 2025-06-11 14:43:57.000209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:57.000220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:57.000234 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000254 | orchestrator | 2025-06-11 14:43:57.000273 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-11 14:43:57.000286 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.150) 0:00:14.670 ******** 2025-06-11 14:43:57.000297 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:43:57.000308 | orchestrator | 2025-06-11 14:43:57.000318 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-11 14:43:57.000329 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.139) 0:00:14.810 ******** 2025-06-11 14:43:57.000339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:57.000350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:57.000361 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000371 | orchestrator | 2025-06-11 14:43:57.000382 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-11 14:43:57.000392 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.157) 0:00:14.967 ******** 2025-06-11 14:43:57.000403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:57.000413 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:43:57.000424 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000434 | orchestrator | 2025-06-11 14:43:57.000445 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-11 14:43:57.000455 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.152) 0:00:15.120 ******** 2025-06-11 14:43:57.000466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:43:57.000477 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  
2025-06-11 14:43:57.000487 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000498 | orchestrator | 2025-06-11 14:43:57.000508 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-11 14:43:57.000519 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.146) 0:00:15.266 ******** 2025-06-11 14:43:57.000533 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000551 | orchestrator | 2025-06-11 14:43:57.000566 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-11 14:43:57.000577 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.126) 0:00:15.393 ******** 2025-06-11 14:43:57.000588 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:43:57.000599 | orchestrator | 2025-06-11 14:43:57.000616 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-11 14:44:03.241542 | orchestrator | Wednesday 11 June 2025 14:43:56 +0000 (0:00:00.123) 0:00:15.517 ******** 2025-06-11 14:44:03.241652 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.241663 | orchestrator | 2025-06-11 14:44:03.241670 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-11 14:44:03.241677 | orchestrator | Wednesday 11 June 2025 14:43:57 +0000 (0:00:00.136) 0:00:15.653 ******** 2025-06-11 14:44:03.241699 | orchestrator | ok: [testbed-node-3] => { 2025-06-11 14:44:03.241705 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-11 14:44:03.241712 | orchestrator | } 2025-06-11 14:44:03.241718 | orchestrator | 2025-06-11 14:44:03.241724 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-11 14:44:03.241730 | orchestrator | Wednesday 11 June 2025 14:43:57 +0000 (0:00:00.353) 0:00:16.007 ******** 2025-06-11 14:44:03.241735 | orchestrator | ok: [testbed-node-3] => { 2025-06-11 14:44:03.241741 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-11 14:44:03.241747 | orchestrator | } 2025-06-11 14:44:03.241753 | orchestrator | 2025-06-11 14:44:03.241758 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-11 14:44:03.241764 | orchestrator | Wednesday 11 June 2025 14:43:57 +0000 (0:00:00.131) 0:00:16.138 ******** 2025-06-11 14:44:03.241770 | orchestrator | ok: [testbed-node-3] => { 2025-06-11 14:44:03.241804 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-11 14:44:03.241811 | orchestrator | } 2025-06-11 14:44:03.241816 | orchestrator | 2025-06-11 14:44:03.241822 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-11 14:44:03.241828 | orchestrator | Wednesday 11 June 2025 14:43:57 +0000 (0:00:00.142) 0:00:16.281 ******** 2025-06-11 14:44:03.241834 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:44:03.241840 | orchestrator | 2025-06-11 14:44:03.241859 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-11 14:44:03.241868 | orchestrator | Wednesday 11 June 2025 14:43:58 +0000 (0:00:00.673) 0:00:16.955 ******** 2025-06-11 14:44:03.241874 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:44:03.241880 | orchestrator | 2025-06-11 14:44:03.241886 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-11 14:44:03.241892 | orchestrator | Wednesday 11 June 2025 14:43:59 +0000 (0:00:00.623) 
0:00:17.578 ******** 2025-06-11 14:44:03.241907 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:44:03.241913 | orchestrator | 2025-06-11 14:44:03.241918 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-11 14:44:03.241924 | orchestrator | Wednesday 11 June 2025 14:43:59 +0000 (0:00:00.486) 0:00:18.065 ******** 2025-06-11 14:44:03.241936 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:44:03.241942 | orchestrator | 2025-06-11 14:44:03.241948 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-11 14:44:03.241954 | orchestrator | Wednesday 11 June 2025 14:43:59 +0000 (0:00:00.137) 0:00:18.202 ******** 2025-06-11 14:44:03.241959 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.241965 | orchestrator | 2025-06-11 14:44:03.241971 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-11 14:44:03.241977 | orchestrator | Wednesday 11 June 2025 14:43:59 +0000 (0:00:00.117) 0:00:18.320 ******** 2025-06-11 14:44:03.241982 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.241988 | orchestrator | 2025-06-11 14:44:03.241993 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-11 14:44:03.241999 | orchestrator | Wednesday 11 June 2025 14:43:59 +0000 (0:00:00.097) 0:00:18.417 ******** 2025-06-11 14:44:03.242005 | orchestrator | ok: [testbed-node-3] => { 2025-06-11 14:44:03.242010 | orchestrator |  "vgs_report": { 2025-06-11 14:44:03.242054 | orchestrator |  "vg": [] 2025-06-11 14:44:03.242072 | orchestrator |  } 2025-06-11 14:44:03.242077 | orchestrator | } 2025-06-11 14:44:03.242083 | orchestrator | 2025-06-11 14:44:03.242089 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-11 14:44:03.242095 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.140) 0:00:18.557 ******** 2025-06-11 14:44:03.242101 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242108 | orchestrator | 2025-06-11 14:44:03.242115 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-11 14:44:03.242122 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.143) 0:00:18.701 ******** 2025-06-11 14:44:03.242135 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242141 | orchestrator | 2025-06-11 14:44:03.242148 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-11 14:44:03.242154 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.133) 0:00:18.835 ******** 2025-06-11 14:44:03.242161 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242168 | orchestrator | 2025-06-11 14:44:03.242174 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-11 14:44:03.242181 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.143) 0:00:18.978 ******** 2025-06-11 14:44:03.242187 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242194 | orchestrator | 2025-06-11 14:44:03.242201 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-11 14:44:03.242215 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.324) 0:00:19.303 ******** 2025-06-11 14:44:03.242222 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242228 | orchestrator | 2025-06-11 
14:44:03.242235 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-11 14:44:03.242242 | orchestrator | Wednesday 11 June 2025 14:44:00 +0000 (0:00:00.136) 0:00:19.439 ******** 2025-06-11 14:44:03.242248 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242255 | orchestrator | 2025-06-11 14:44:03.242261 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-11 14:44:03.242268 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.131) 0:00:19.570 ******** 2025-06-11 14:44:03.242275 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242281 | orchestrator | 2025-06-11 14:44:03.242288 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-11 14:44:03.242295 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.140) 0:00:19.711 ******** 2025-06-11 14:44:03.242301 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242308 | orchestrator | 2025-06-11 14:44:03.242315 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-11 14:44:03.242334 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.127) 0:00:19.838 ******** 2025-06-11 14:44:03.242341 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242348 | orchestrator | 2025-06-11 14:44:03.242355 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-11 14:44:03.242361 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.134) 0:00:19.973 ******** 2025-06-11 14:44:03.242368 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242375 | orchestrator | 2025-06-11 14:44:03.242381 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-11 14:44:03.242388 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.140) 0:00:20.113 ******** 2025-06-11 14:44:03.242395 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242402 | orchestrator | 2025-06-11 14:44:03.242408 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-11 14:44:03.242415 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.139) 0:00:20.253 ******** 2025-06-11 14:44:03.242422 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242428 | orchestrator | 2025-06-11 14:44:03.242435 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-11 14:44:03.242441 | orchestrator | Wednesday 11 June 2025 14:44:01 +0000 (0:00:00.140) 0:00:20.394 ******** 2025-06-11 14:44:03.242448 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242455 | orchestrator | 2025-06-11 14:44:03.242462 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-11 14:44:03.242469 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.137) 0:00:20.532 ******** 2025-06-11 14:44:03.242476 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242482 | orchestrator | 2025-06-11 14:44:03.242487 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-11 14:44:03.242496 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.134) 0:00:20.667 ******** 2025-06-11 14:44:03.242507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:44:03.242514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:44:03.242520 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242526 | orchestrator | 2025-06-11 14:44:03.242568 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-11 14:44:03.242575 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.137) 0:00:20.805 ******** 2025-06-11 14:44:03.242581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:44:03.242587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:44:03.242593 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242599 | orchestrator | 2025-06-11 14:44:03.242605 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-11 14:44:03.242610 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.342) 0:00:21.147 ******** 2025-06-11 14:44:03.242616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:44:03.242622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:44:03.242628 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242634 | orchestrator | 2025-06-11 14:44:03.242640 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-11 14:44:03.242645 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.160) 0:00:21.308 ******** 2025-06-11 14:44:03.242651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:44:03.242657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:44:03.242663 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242669 | orchestrator | 2025-06-11 14:44:03.242675 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-11 14:44:03.242680 | orchestrator | Wednesday 11 June 2025 14:44:02 +0000 (0:00:00.158) 0:00:21.466 ******** 2025-06-11 14:44:03.242686 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})  2025-06-11 14:44:03.242692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})  2025-06-11 14:44:03.242698 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:44:03.242704 | orchestrator | 2025-06-11 14:44:03.242709 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-06-11 14:44:03.242715 | orchestrator | Wednesday 11 June 2025 14:44:03 +0000 (0:00:00.145) 0:00:21.611 ********
2025-06-11 14:44:03.242721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:03.242732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.530925 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531032 | orchestrator |
2025-06-11 14:44:08.531050 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-11 14:44:08.531094 | orchestrator | Wednesday 11 June 2025 14:44:03 +0000 (0:00:00.148) 0:00:21.760 ********
2025-06-11 14:44:08.531115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531151 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531163 | orchestrator |
2025-06-11 14:44:08.531174 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-11 14:44:08.531185 | orchestrator | Wednesday 11 June 2025 14:44:03 +0000 (0:00:00.189) 0:00:21.949 ********
2025-06-11 14:44:08.531195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531217 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531228 | orchestrator |
2025-06-11 14:44:08.531238 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-11 14:44:08.531249 | orchestrator | Wednesday 11 June 2025 14:44:03 +0000 (0:00:00.151) 0:00:22.101 ********
2025-06-11 14:44:08.531260 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:44:08.531272 | orchestrator |
2025-06-11 14:44:08.531282 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-11 14:44:08.531293 | orchestrator | Wednesday 11 June 2025 14:44:04 +0000 (0:00:00.532) 0:00:22.633 ********
2025-06-11 14:44:08.531303 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:44:08.531314 | orchestrator |
2025-06-11 14:44:08.531325 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-11 14:44:08.531336 | orchestrator | Wednesday 11 June 2025 14:44:04 +0000 (0:00:00.529) 0:00:23.163 ********
2025-06-11 14:44:08.531349 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:44:08.531361 | orchestrator |
2025-06-11 14:44:08.531373 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-11 14:44:08.531385 | orchestrator | Wednesday 11 June 2025 14:44:04 +0000 (0:00:00.145) 0:00:23.309 ********
2025-06-11 14:44:08.531398 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'vg_name': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531412 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'vg_name': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531424 | orchestrator |
2025-06-11 14:44:08.531437 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-11 14:44:08.531449 | orchestrator | Wednesday 11 June 2025 14:44:04 +0000 (0:00:00.167) 0:00:23.477 ********
2025-06-11 14:44:08.531461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531486 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531532 | orchestrator |
2025-06-11 14:44:08.531545 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-11 14:44:08.531557 | orchestrator | Wednesday 11 June 2025 14:44:05 +0000 (0:00:00.166) 0:00:23.643 ********
2025-06-11 14:44:08.531570 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531606 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531618 | orchestrator |
2025-06-11 14:44:08.531630 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-11 14:44:08.531643 | orchestrator | Wednesday 11 June 2025 14:44:05 +0000 (0:00:00.328) 0:00:23.972 ********
2025-06-11 14:44:08.531654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'})
2025-06-11 14:44:08.531667 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'})
2025-06-11 14:44:08.531679 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:44:08.531691 | orchestrator |
2025-06-11 14:44:08.531702 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-11 14:44:08.531713 | orchestrator | Wednesday 11 June 2025 14:44:05 +0000 (0:00:00.155) 0:00:24.127 ********
2025-06-11 14:44:08.531723 | orchestrator | ok: [testbed-node-3] => {
2025-06-11 14:44:08.531734 | orchestrator |     "lvm_report": {
2025-06-11 14:44:08.531745 | orchestrator |         "lv": [
2025-06-11 14:44:08.531756 | orchestrator |             {
2025-06-11 14:44:08.531820 | orchestrator |                 "lv_name": "osd-block-28682609-b410-5575-84cb-1d408b8d4d4a",
2025-06-11 14:44:08.531844 | orchestrator |                 "vg_name": "ceph-28682609-b410-5575-84cb-1d408b8d4d4a"
2025-06-11 14:44:08.531864 | orchestrator |             },
2025-06-11 14:44:08.531876 | orchestrator |             {
2025-06-11 14:44:08.531905 | orchestrator |                 "lv_name": "osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32",
2025-06-11 14:44:08.531916 | orchestrator |                 "vg_name": "ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32"
2025-06-11 14:44:08.531927 | orchestrator |             }
2025-06-11 14:44:08.531938 | orchestrator |         ],
2025-06-11 14:44:08.531949 | orchestrator |         "pv": [
2025-06-11 14:44:08.531959 | orchestrator |             {
2025-06-11 14:44:08.531970 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-11 14:44:08.531981 | orchestrator |                 "vg_name": "ceph-28682609-b410-5575-84cb-1d408b8d4d4a"
2025-06-11 14:44:08.531991 | orchestrator |             },
2025-06-11 14:44:08.532002 | orchestrator |             {
2025-06-11 14:44:08.532013 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-11 14:44:08.532023 | orchestrator |                 "vg_name": "ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32"
2025-06-11 14:44:08.532034 | orchestrator |             }
2025-06-11 14:44:08.532044 | orchestrator |         ]
2025-06-11 14:44:08.532055 | orchestrator |     }
2025-06-11 14:44:08.532066 | orchestrator | }
2025-06-11 14:44:08.532077 | orchestrator |
2025-06-11 14:44:08.532087 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-11 14:44:08.532098 | orchestrator |
2025-06-11 14:44:08.532109 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-11 14:44:08.532119 | orchestrator | Wednesday 11 June 2025 14:44:05 +0000 (0:00:00.277) 0:00:24.404 ********
2025-06-11 14:44:08.532135 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-11 14:44:08.532146 | orchestrator |
2025-06-11 14:44:08.532157 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-11 14:44:08.532167 | orchestrator | Wednesday 11 June 2025 14:44:06 +0000 (0:00:00.247) 0:00:24.652 ********
2025-06-11 14:44:08.532178 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:08.532188 | orchestrator |
2025-06-11 14:44:08.532199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532210 | orchestrator | Wednesday 11 June 2025 14:44:06 +0000 (0:00:00.240) 0:00:24.892 ********
2025-06-11 14:44:08.532220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-11 14:44:08.532231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-11 14:44:08.532250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-11 14:44:08.532261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-11 14:44:08.532271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-11 14:44:08.532282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-11 14:44:08.532292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-11 14:44:08.532303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-11 14:44:08.532313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-11 14:44:08.532324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-11 14:44:08.532334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-11 14:44:08.532345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-11 14:44:08.532355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-11 14:44:08.532366 | orchestrator |
2025-06-11 14:44:08.532376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532387 | orchestrator | Wednesday 11 June 2025 14:44:06 +0000 (0:00:00.424) 0:00:25.317 ********
2025-06-11 14:44:08.532397 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532408 | orchestrator |
2025-06-11 14:44:08.532419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532429 | orchestrator | Wednesday 11 June 2025 14:44:06 +0000 (0:00:00.206) 0:00:25.523 ********
2025-06-11 14:44:08.532440 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532450 | orchestrator |
2025-06-11 14:44:08.532461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532471 | orchestrator | Wednesday 11 June 2025 14:44:07 +0000 (0:00:00.196) 0:00:25.720 ********
2025-06-11 14:44:08.532482 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532493 | orchestrator |
2025-06-11 14:44:08.532503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532514 | orchestrator | Wednesday 11 June 2025 14:44:07 +0000 (0:00:00.178) 0:00:25.899 ********
2025-06-11 14:44:08.532524 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532535 | orchestrator |
2025-06-11 14:44:08.532545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532556 | orchestrator | Wednesday 11 June 2025 14:44:07 +0000 (0:00:00.547) 0:00:26.446 ********
2025-06-11 14:44:08.532566 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532577 | orchestrator |
2025-06-11 14:44:08.532588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532598 | orchestrator | Wednesday 11 June 2025 14:44:08 +0000 (0:00:00.207) 0:00:26.653 ********
2025-06-11 14:44:08.532609 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532619 | orchestrator |
2025-06-11 14:44:08.532630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:08.532640 | orchestrator | Wednesday 11 June 2025 14:44:08 +0000 (0:00:00.202) 0:00:26.856 ********
2025-06-11 14:44:08.532651 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:08.532662 | orchestrator |
2025-06-11 14:44:08.532679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926460 | orchestrator | Wednesday 11 June 2025 14:44:08 +0000 (0:00:00.192) 0:00:27.049 ********
2025-06-11 14:44:18.926566 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.926582 | orchestrator |
2025-06-11 14:44:18.926594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926606 | orchestrator | Wednesday 11 June 2025 14:44:08 +0000 (0:00:00.213) 0:00:27.263 ********
2025-06-11 14:44:18.926637 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29)
2025-06-11 14:44:18.926650 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29)
2025-06-11 14:44:18.926662 | orchestrator |
2025-06-11 14:44:18.926673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926684 | orchestrator | Wednesday 11 June 2025 14:44:09 +0000 (0:00:00.428) 0:00:27.691 ********
2025-06-11 14:44:18.926695 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86)
2025-06-11 14:44:18.926706 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86)
2025-06-11 14:44:18.926717 | orchestrator |
2025-06-11 14:44:18.926728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926739 | orchestrator | Wednesday 11 June 2025 14:44:09 +0000 (0:00:00.416) 0:00:28.107 ********
2025-06-11 14:44:18.926757 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9)
2025-06-11 14:44:18.926768 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9)
2025-06-11 14:44:18.926807 | orchestrator |
2025-06-11 14:44:18.926819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926830 | orchestrator | Wednesday 11 June 2025 14:44:10 +0000 (0:00:00.426) 0:00:28.534 ********
2025-06-11 14:44:18.926841 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b)
2025-06-11 14:44:18.926852 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b)
2025-06-11 14:44:18.926862 | orchestrator |
2025-06-11 14:44:18.926873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:18.926884 | orchestrator | Wednesday 11 June 2025 14:44:10 +0000 (0:00:00.454) 0:00:28.988 ********
2025-06-11 14:44:18.926895 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
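Each _add-device-links.yml include above resolves the stable /dev/disk/by-id symlinks for one device and adds them to the device list, which is why every QEMU disk appears under both a scsi-0QEMU_... and a scsi-SQEMU_... alias. The include file's contents are not shown in the log; a rough equivalent of the lookup, assuming a shell-based implementation:

    - name: Find by-id links pointing at /dev/{{ item }}
      ansible.builtin.shell: |
        # print the basename of every by-id symlink that resolves to this device
        for link in /dev/disk/by-id/*; do
          [ "$(readlink -f "$link")" = "/dev/{{ item }}" ] && basename "$link"
        done
        true
      register: _device_links
      changed_when: false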
2025-06-11 14:44:18.926906 | orchestrator |
2025-06-11 14:44:18.926917 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.926927 | orchestrator | Wednesday 11 June 2025 14:44:10 +0000 (0:00:00.332) 0:00:29.321 ********
2025-06-11 14:44:18.926938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-11 14:44:18.926949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-11 14:44:18.926960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-11 14:44:18.926971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-11 14:44:18.926981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-11 14:44:18.926992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-11 14:44:18.927004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-11 14:44:18.927016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-11 14:44:18.927028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-11 14:44:18.927039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-11 14:44:18.927052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-11 14:44:18.927064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-11 14:44:18.927075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-11 14:44:18.927098 | orchestrator |
2025-06-11 14:44:18.927110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927123 | orchestrator | Wednesday 11 June 2025 14:44:11 +0000 (0:00:00.624) 0:00:29.946 ********
2025-06-11 14:44:18.927135 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927148 | orchestrator |
2025-06-11 14:44:18.927160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927173 | orchestrator | Wednesday 11 June 2025 14:44:11 +0000 (0:00:00.221) 0:00:30.168 ********
2025-06-11 14:44:18.927185 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927197 | orchestrator |
2025-06-11 14:44:18.927209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927221 | orchestrator | Wednesday 11 June 2025 14:44:11 +0000 (0:00:00.194) 0:00:30.362 ********
2025-06-11 14:44:18.927233 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927245 | orchestrator |
2025-06-11 14:44:18.927258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927270 | orchestrator | Wednesday 11 June 2025 14:44:12 +0000 (0:00:00.227) 0:00:30.589 ********
2025-06-11 14:44:18.927282 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927293 | orchestrator |
2025-06-11 14:44:18.927320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927332 | orchestrator | Wednesday 11 June 2025 14:44:12 +0000 (0:00:00.196) 0:00:30.786 ********
2025-06-11 14:44:18.927343 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927354 | orchestrator |
2025-06-11 14:44:18.927366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927377 | orchestrator | Wednesday 11 June 2025 14:44:12 +0000 (0:00:00.228) 0:00:31.014 ********
2025-06-11 14:44:18.927387 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927398 | orchestrator |
2025-06-11 14:44:18.927409 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927420 | orchestrator | Wednesday 11 June 2025 14:44:12 +0000 (0:00:00.192) 0:00:31.207 ********
2025-06-11 14:44:18.927431 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927442 | orchestrator |
2025-06-11 14:44:18.927453 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927463 | orchestrator | Wednesday 11 June 2025 14:44:12 +0000 (0:00:00.198) 0:00:31.405 ********
2025-06-11 14:44:18.927474 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927485 | orchestrator |
2025-06-11 14:44:18.927496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927507 | orchestrator | Wednesday 11 June 2025 14:44:13 +0000 (0:00:00.201) 0:00:31.607 ********
2025-06-11 14:44:18.927518 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-11 14:44:18.927529 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-11 14:44:18.927540 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-11 14:44:18.927551 | orchestrator | ok: [testbed-node-4] => (item=sda16)
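_add-device-partitions.yml does the same per device for partitions: only /dev/sda carries any (sda1, sda14, sda15, sda16 — the layout typical of an Ubuntu cloud image), so the per-device tasks for the loop devices and for sdb/sdc/sdd all skip. A command that yields exactly this information, wrapped as a hypothetical task:

    - name: List partitions of /dev/{{ item }}
      # -n no headings, -r raw output, one NAME/TYPE pair per line;
      # the caller would keep only rows with TYPE == "part"
      ansible.builtin.command: lsblk -nro NAME,TYPE /dev/{{ item }}
      register: _device_partitions
      changed_when: false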
2025-06-11 14:44:18.927562 | orchestrator |
2025-06-11 14:44:18.927573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927584 | orchestrator | Wednesday 11 June 2025 14:44:13 +0000 (0:00:00.809) 0:00:32.416 ********
2025-06-11 14:44:18.927595 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927606 | orchestrator |
2025-06-11 14:44:18.927617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927627 | orchestrator | Wednesday 11 June 2025 14:44:14 +0000 (0:00:00.203) 0:00:32.619 ********
2025-06-11 14:44:18.927638 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927649 | orchestrator |
2025-06-11 14:44:18.927660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927671 | orchestrator | Wednesday 11 June 2025 14:44:14 +0000 (0:00:00.203) 0:00:32.822 ********
2025-06-11 14:44:18.927682 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927692 | orchestrator |
2025-06-11 14:44:18.927703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:18.927721 | orchestrator | Wednesday 11 June 2025 14:44:14 +0000 (0:00:00.648) 0:00:33.471 ********
2025-06-11 14:44:18.927732 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927743 | orchestrator |
2025-06-11 14:44:18.927754 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-11 14:44:18.927765 | orchestrator | Wednesday 11 June 2025 14:44:15 +0000 (0:00:00.201) 0:00:33.672 ********
2025-06-11 14:44:18.927821 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.927833 | orchestrator |
2025-06-11 14:44:18.927844 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-11 14:44:18.927855 | orchestrator | Wednesday 11 June 2025 14:44:15 +0000 (0:00:00.136) 0:00:33.809 ********
2025-06-11 14:44:18.927866 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd502667e-47a1-548a-a5f2-2993142d2957'}})
2025-06-11 14:44:18.927877 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '40a0a619-d38c-5879-89ae-a3eefd65fa41'}})
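The loop items printed by this task are exactly what Jinja's dict2items filter produces from a ceph_osd_devices mapping: one OSD per device, each pinned to a fixed LVM UUID. A plausible reconstruction of the underlying data and task — the variable shape is inferred from the log, the fact name is invented:

    # host_vars shape inferred from the printed items
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: d502667e-47a1-548a-a5f2-2993142d2957
      sdc:
        osd_lvm_uuid: 40a0a619-d38c-5879-89ae-a3eefd65fa41

    - name: Create dict of block VGs -> PVs from ceph_osd_devices
      ansible.builtin.set_fact:
        # maps e.g. ceph-d502667e-... -> /dev/sdb
        _block_vgs_to_pvs: "{{ _block_vgs_to_pvs | default({}) | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}"
      loop: "{{ ceph_osd_devices | dict2items }}"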
2025-06-11 14:44:18.927887 | orchestrator |
2025-06-11 14:44:18.927898 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-11 14:44:18.927909 | orchestrator | Wednesday 11 June 2025 14:44:15 +0000 (0:00:00.191) 0:00:34.000 ********
2025-06-11 14:44:18.927922 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:18.927934 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:18.927945 | orchestrator |
2025-06-11 14:44:18.927956 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-11 14:44:18.927966 | orchestrator | Wednesday 11 June 2025 14:44:17 +0000 (0:00:01.851) 0:00:35.851 ********
2025-06-11 14:44:18.927977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:18.927989 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:18.928000 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:18.928011 | orchestrator |
2025-06-11 14:44:18.928022 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-11 14:44:18.928033 | orchestrator | Wednesday 11 June 2025 14:44:17 +0000 (0:00:00.161) 0:00:36.013 ********
2025-06-11 14:44:18.928044 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:18.928055 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
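"Create block VGs" and "Create block LVs" give every OSD a dedicated volume group ceph-<uuid> on its physical volume and a single LV osd-block-<uuid> spanning it — the naming scheme ceph-volume uses for LVM-prepared OSDs. Functionally this is equivalent to the following sketch with the community.general modules; the playbook's actual implementation is not visible in the log, and _block_vgs_to_pvs / _lvm_volumes_block are the hypothetical names introduced above:

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"  # e.g. /dev/sdb
      loop: "{{ _lvm_volumes_block }}"                # items shaped as printed in the log

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE    # one LV filling the whole VG
        shrink: false
      loop: "{{ _lvm_volumes_block }}"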
2025-06-11 14:44:18.928071 | orchestrator |
2025-06-11 14:44:18.928099 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-11 14:44:24.389040 | orchestrator | Wednesday 11 June 2025 14:44:18 +0000 (0:00:01.426) 0:00:37.440 ********
2025-06-11 14:44:24.389154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389177 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389193 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389206 | orchestrator |
2025-06-11 14:44:24.389216 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-11 14:44:24.389227 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.161) 0:00:37.601 ********
2025-06-11 14:44:24.389242 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389283 | orchestrator |
2025-06-11 14:44:24.389309 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-11 14:44:24.389319 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.135) 0:00:37.737 ********
2025-06-11 14:44:24.389328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389352 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389360 | orchestrator |
2025-06-11 14:44:24.389374 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-11 14:44:24.389389 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.148) 0:00:37.885 ********
2025-06-11 14:44:24.389403 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389419 | orchestrator |
2025-06-11 14:44:24.389434 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-11 14:44:24.389446 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.129) 0:00:38.015 ********
2025-06-11 14:44:24.389455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389473 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389481 | orchestrator |
2025-06-11 14:44:24.389490 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-11 14:44:24.389498 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.145) 0:00:38.161 ********
2025-06-11 14:44:24.389506 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389515 | orchestrator |
2025-06-11 14:44:24.389523 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-11 14:44:24.389532 | orchestrator | Wednesday 11 June 2025 14:44:19 +0000 (0:00:00.331) 0:00:38.492 ********
2025-06-11 14:44:24.389540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389557 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389566 | orchestrator |
2025-06-11 14:44:24.389574 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-11 14:44:24.389582 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.140) 0:00:38.633 ********
2025-06-11 14:44:24.389591 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:24.389600 | orchestrator |
2025-06-11 14:44:24.389609 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-11 14:44:24.389617 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.141) 0:00:38.774 ********
2025-06-11 14:44:24.389626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389643 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389651 | orchestrator |
2025-06-11 14:44:24.389660 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-11 14:44:24.389668 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.178) 0:00:38.953 ********
2025-06-11 14:44:24.389676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389703 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389711 | orchestrator |
2025-06-11 14:44:24.389720 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-11 14:44:24.389729 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.161) 0:00:39.115 ********
2025-06-11 14:44:24.389758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:24.389796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:24.389812 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389824 | orchestrator |
2025-06-11 14:44:24.389833 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-11 14:44:24.389842 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.145) 0:00:39.260 ********
2025-06-11 14:44:24.389853 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389868 | orchestrator |
2025-06-11 14:44:24.389882 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-11 14:44:24.389898 | orchestrator | Wednesday 11 June 2025 14:44:20 +0000 (0:00:00.147) 0:00:39.407 ********
2025-06-11 14:44:24.389913 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389928 | orchestrator |
2025-06-11 14:44:24.389942 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-11 14:44:24.389951 | orchestrator | Wednesday 11 June 2025 14:44:21 +0000 (0:00:00.143) 0:00:39.551 ********
2025-06-11 14:44:24.389959 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.389968 | orchestrator |
2025-06-11 14:44:24.389982 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-11 14:44:24.389991 | orchestrator | Wednesday 11 June 2025 14:44:21 +0000 (0:00:00.135) 0:00:39.686 ********
2025-06-11 14:44:24.389999 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:44:24.390008 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-11 14:44:24.390069 | orchestrator | }
2025-06-11 14:44:24.390081 | orchestrator |
2025-06-11 14:44:24.390090 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-11 14:44:24.390099 | orchestrator | Wednesday 11 June 2025 14:44:21 +0000 (0:00:00.137) 0:00:39.824 ********
2025-06-11 14:44:24.390107 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:44:24.390116 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-11 14:44:24.390124 | orchestrator | }
2025-06-11 14:44:24.390133 | orchestrator |
2025-06-11 14:44:24.390142 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-11 14:44:24.390150 | orchestrator | Wednesday 11 June 2025 14:44:21 +0000 (0:00:00.158) 0:00:39.982 ********
2025-06-11 14:44:24.390159 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:44:24.390167 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-11 14:44:24.390176 | orchestrator | }
2025-06-11 14:44:24.390185 | orchestrator |
2025-06-11 14:44:24.390193 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-11 14:44:24.390202 | orchestrator | Wednesday 11 June 2025 14:44:21 +0000 (0:00:00.134) 0:00:40.117 ********
2025-06-11 14:44:24.390210 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:24.390219 | orchestrator |
2025-06-11 14:44:24.390227 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-11 14:44:24.390236 | orchestrator | Wednesday 11 June 2025 14:44:22 +0000 (0:00:00.731) 0:00:40.849 ********
2025-06-11 14:44:24.390244 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:24.390253 | orchestrator |
2025-06-11 14:44:24.390262 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-11 14:44:24.390278 | orchestrator | Wednesday 11 June 2025 14:44:22 +0000 (0:00:00.521) 0:00:41.370 ********
2025-06-11 14:44:24.390287 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:24.390296 | orchestrator |
2025-06-11 14:44:24.390304 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-11 14:44:24.390313 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.506) 0:00:41.877 ********
2025-06-11 14:44:24.390321 | orchestrator | ok: [testbed-node-4]
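The three "Gather ... VGs" tasks collect total and free bytes per volume group so that the size checks that follow can decide whether the requested DB/WAL LVs fit. In this run no dedicated DB or WAL devices are configured, so the combined report below is empty and every downstream calculation skips. One such gather might look like this sketch (how the relevant VGs are selected is an assumption):

    - name: Gather DB VGs with total and available size in bytes
      ansible.builtin.command: >
        vgs --reportformat json --units b --nosuffix
        -o vg_name,vg_size,vg_free
      register: _db_vgs_cmd_output
      changed_when: false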
2025-06-11 14:44:24.390330 | orchestrator |
2025-06-11 14:44:24.390339 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-11 14:44:24.390347 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.141) 0:00:42.018 ********
2025-06-11 14:44:24.390356 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390364 | orchestrator |
2025-06-11 14:44:24.390373 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-11 14:44:24.390381 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.119) 0:00:42.138 ********
2025-06-11 14:44:24.390390 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390398 | orchestrator |
2025-06-11 14:44:24.390414 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-11 14:44:24.390425 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.094) 0:00:42.232 ********
2025-06-11 14:44:24.390433 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:44:24.390442 | orchestrator |     "vgs_report": {
2025-06-11 14:44:24.390452 | orchestrator |         "vg": []
2025-06-11 14:44:24.390465 | orchestrator |     }
2025-06-11 14:44:24.390480 | orchestrator | }
2025-06-11 14:44:24.390495 | orchestrator |
2025-06-11 14:44:24.390510 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-11 14:44:24.390526 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.144) 0:00:42.377 ********
2025-06-11 14:44:24.390540 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390556 | orchestrator |
2025-06-11 14:44:24.390565 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-11 14:44:24.390574 | orchestrator | Wednesday 11 June 2025 14:44:23 +0000 (0:00:00.135) 0:00:42.513 ********
2025-06-11 14:44:24.390582 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390591 | orchestrator |
2025-06-11 14:44:24.390599 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-11 14:44:24.390608 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.125) 0:00:42.638 ********
2025-06-11 14:44:24.390616 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390625 | orchestrator |
2025-06-11 14:44:24.390633 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-11 14:44:24.390642 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.125) 0:00:42.764 ********
2025-06-11 14:44:24.390650 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:24.390659 | orchestrator |
2025-06-11 14:44:24.390668 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-11 14:44:24.390685 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.141) 0:00:42.905 ********
2025-06-11 14:44:28.930950 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.931110 | orchestrator |
2025-06-11 14:44:28.931131 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-11 14:44:28.931144 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.140) 0:00:43.046 ********
2025-06-11 14:44:28.932084 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932118 | orchestrator |
2025-06-11 14:44:28.932137 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-11 14:44:28.932155 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.306) 0:00:43.353 ********
2025-06-11 14:44:28.932172 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932189 | orchestrator |
2025-06-11 14:44:28.932206 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-11 14:44:28.932224 | orchestrator | Wednesday 11 June 2025 14:44:24 +0000 (0:00:00.134) 0:00:43.488 ********
2025-06-11 14:44:28.932285 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932317 | orchestrator |
2025-06-11 14:44:28.932339 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-11 14:44:28.932357 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.135) 0:00:43.623 ********
2025-06-11 14:44:28.932377 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932396 | orchestrator |
2025-06-11 14:44:28.932413 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-11 14:44:28.932450 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.131) 0:00:43.755 ********
2025-06-11 14:44:28.932462 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932473 | orchestrator |
2025-06-11 14:44:28.932483 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-11 14:44:28.932494 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.133) 0:00:43.888 ********
2025-06-11 14:44:28.932505 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932515 | orchestrator |
2025-06-11 14:44:28.932526 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-11 14:44:28.932536 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.137) 0:00:44.026 ********
2025-06-11 14:44:28.932547 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932557 | orchestrator |
2025-06-11 14:44:28.932568 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-11 14:44:28.932578 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.128) 0:00:44.154 ********
2025-06-11 14:44:28.932589 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932600 | orchestrator |
2025-06-11 14:44:28.932610 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-11 14:44:28.932621 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.134) 0:00:44.289 ********
2025-06-11 14:44:28.932631 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932642 | orchestrator |
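The 30 GiB floor for DB LVs most likely follows the commonly cited RocksDB level sizing for BlueStore (the 3/30/300 GB steps), where a DB volume below the next level boundary is largely wasted; the exact rationale is not stated in the log. Functionally the guard is an assert along these lines — variable names are invented for illustration, and with no ceph_db_devices configured here it simply skips:

    - name: Fail if DB LV size < 30 GiB for ceph_db_devices
      ansible.builtin.assert:
        that:
          - (_db_lv_size_bytes | int) >= (30 * 1024 * 1024 * 1024)
        fail_msg: "DB LVs smaller than 30 GiB are too small to be useful for BlueStore"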
2025-06-11 14:44:28.932652 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-11 14:44:28.932663 | orchestrator | Wednesday 11 June 2025 14:44:25 +0000 (0:00:00.128) 0:00:44.417 ********
2025-06-11 14:44:28.932676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.932688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.932699 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932710 | orchestrator |
2025-06-11 14:44:28.932721 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-11 14:44:28.932731 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.154) 0:00:44.571 ********
2025-06-11 14:44:28.932742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.932753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.932763 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932812 | orchestrator |
2025-06-11 14:44:28.932824 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-11 14:44:28.932835 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.142) 0:00:44.713 ********
2025-06-11 14:44:28.932846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.932857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.932868 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932890 | orchestrator |
2025-06-11 14:44:28.932901 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-11 14:44:28.932912 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.160) 0:00:44.874 ********
2025-06-11 14:44:28.932923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.932934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.932944 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.932955 | orchestrator |
2025-06-11 14:44:28.932966 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-11 14:44:28.933001 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.350) 0:00:45.224 ********
2025-06-11 14:44:28.933013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933034 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.933045 | orchestrator |
2025-06-11 14:44:28.933056 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-11 14:44:28.933066 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.155) 0:00:45.380 ********
2025-06-11 14:44:28.933077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933098 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.933109 | orchestrator |
2025-06-11 14:44:28.933120 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-11 14:44:28.933131 | orchestrator | Wednesday 11 June 2025 14:44:26 +0000 (0:00:00.146) 0:00:45.526 ********
2025-06-11 14:44:28.933142 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933163 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.933174 | orchestrator |
2025-06-11 14:44:28.933185 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-11 14:44:28.933195 | orchestrator | Wednesday 11 June 2025 14:44:27 +0000 (0:00:00.152) 0:00:45.678 ********
2025-06-11 14:44:28.933207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933229 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.933240 | orchestrator |
2025-06-11 14:44:28.933250 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-11 14:44:28.933261 | orchestrator | Wednesday 11 June 2025 14:44:27 +0000 (0:00:00.156) 0:00:45.835 ********
2025-06-11 14:44:28.933272 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:28.933283 | orchestrator |
2025-06-11 14:44:28.933294 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-11 14:44:28.933304 | orchestrator | Wednesday 11 June 2025 14:44:27 +0000 (0:00:00.481) 0:00:46.317 ********
2025-06-11 14:44:28.933322 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:28.933333 | orchestrator |
2025-06-11 14:44:28.933343 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-11 14:44:28.933354 | orchestrator | Wednesday 11 June 2025 14:44:28 +0000 (0:00:00.501) 0:00:46.818 ********
2025-06-11 14:44:28.933364 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:44:28.933375 | orchestrator |
2025-06-11 14:44:28.933386 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-11 14:44:28.933396 | orchestrator | Wednesday 11 June 2025 14:44:28 +0000 (0:00:00.152) 0:00:46.970 ********
2025-06-11 14:44:28.933445 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'vg_name': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933458 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'vg_name': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933474 | orchestrator |
2025-06-11 14:44:28.933485 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-11 14:44:28.933496 | orchestrator | Wednesday 11 June 2025 14:44:28 +0000 (0:00:00.166) 0:00:47.136 ********
2025-06-11 14:44:28.933506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933528 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:28.933539 | orchestrator |
2025-06-11 14:44:28.933549 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-11 14:44:28.933560 | orchestrator | Wednesday 11 June 2025 14:44:28 +0000 (0:00:00.146) 0:00:47.283 ********
2025-06-11 14:44:28.933571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:28.933582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:28.933600 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:34.805921 | orchestrator |
2025-06-11 14:44:34.806099 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-11 14:44:34.806120 | orchestrator | Wednesday 11 June 2025 14:44:28 +0000 (0:00:00.164) 0:00:47.448 ********
2025-06-11 14:44:34.806133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'})
2025-06-11 14:44:34.806146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'})
2025-06-11 14:44:34.806158 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:44:34.806169 | orchestrator |
2025-06-11 14:44:34.806181 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-11 14:44:34.806192 | orchestrator | Wednesday 11 June 2025 14:44:29 +0000 (0:00:00.154) 0:00:47.603 ********
2025-06-11 14:44:34.806203 | orchestrator | ok: [testbed-node-4] => {
2025-06-11 14:44:34.806214 | orchestrator |     "lvm_report": {
2025-06-11 14:44:34.806226 | orchestrator |         "lv": [
2025-06-11 14:44:34.806236 | orchestrator |             {
2025-06-11 14:44:34.806248 | orchestrator |                 "lv_name": "osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41",
2025-06-11 14:44:34.806275 | orchestrator |                 "vg_name": "ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41"
2025-06-11 14:44:34.806286 | orchestrator |             },
2025-06-11 14:44:34.806297 | orchestrator |             {
2025-06-11 14:44:34.806308 | orchestrator |                 "lv_name": "osd-block-d502667e-47a1-548a-a5f2-2993142d2957",
2025-06-11 14:44:34.806319 | orchestrator |                 "vg_name": "ceph-d502667e-47a1-548a-a5f2-2993142d2957"
2025-06-11 14:44:34.806348 | orchestrator |             }
2025-06-11 14:44:34.806360 | orchestrator |         ],
2025-06-11 14:44:34.806370 | orchestrator |         "pv": [
2025-06-11 14:44:34.806381 | orchestrator |             {
2025-06-11 14:44:34.806392 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-11 14:44:34.806403 | orchestrator |                 "vg_name": "ceph-d502667e-47a1-548a-a5f2-2993142d2957"
2025-06-11 14:44:34.806415 | orchestrator |             },
2025-06-11 14:44:34.806427 | orchestrator |             {
2025-06-11 14:44:34.806439 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-11 14:44:34.806451 | orchestrator |                 "vg_name": "ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41"
2025-06-11 14:44:34.806462 | orchestrator |             }
2025-06-11 14:44:34.806474 | orchestrator |         ]
2025-06-11 14:44:34.806486 | orchestrator |     }
2025-06-11 14:44:34.806498 | orchestrator | }
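As on testbed-node-3, the final report for testbed-node-4 shows exactly one block LV per OSD and no DB/WAL volumes. In ceph-ansible-style deployments this layout maps one-to-one onto an lvm_volumes list — the same structure the earlier "Fail if ... defined in lvm_volumes is missing" guards validate against. A plausible rendering for this node, inferred from the report above rather than copied from the job's actual configuration:

    lvm_volumes:
      - data: osd-block-d502667e-47a1-548a-a5f2-2993142d2957
        data_vg: ceph-d502667e-47a1-548a-a5f2-2993142d2957
      - data: osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41
        data_vg: ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41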
2025-06-11 14:44:34.806510 | orchestrator |
2025-06-11 14:44:34.806522 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-11 14:44:34.806534 | orchestrator |
2025-06-11 14:44:34.806547 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-11 14:44:34.806560 | orchestrator | Wednesday 11 June 2025 14:44:29 +0000 (0:00:00.457) 0:00:48.060 ********
2025-06-11 14:44:34.806572 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-11 14:44:34.806585 | orchestrator |
2025-06-11 14:44:34.806597 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-11 14:44:34.806609 | orchestrator | Wednesday 11 June 2025 14:44:29 +0000 (0:00:00.246) 0:00:48.307 ********
2025-06-11 14:44:34.806622 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:44:34.806634 | orchestrator |
2025-06-11 14:44:34.806646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.806658 | orchestrator | Wednesday 11 June 2025 14:44:30 +0000 (0:00:00.233) 0:00:48.540 ********
2025-06-11 14:44:34.806670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-11 14:44:34.806682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-11 14:44:34.806694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-11 14:44:34.806706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-11 14:44:34.806718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-11 14:44:34.806730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-11 14:44:34.806742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-11 14:44:34.806754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-11 14:44:34.806765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-11 14:44:34.806799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-11 14:44:34.806810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-11 14:44:34.806820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-11 14:44:34.806831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-11 14:44:34.806841 | orchestrator |
2025-06-11 14:44:34.806852 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.806862 | orchestrator | Wednesday 11 June 2025 14:44:30 +0000 (0:00:00.394) 0:00:48.935 ********
2025-06-11 14:44:34.806873 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.806884 | orchestrator |
2025-06-11 14:44:34.806894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.806905 | orchestrator | Wednesday 11 June 2025 14:44:30 +0000 (0:00:00.200) 0:00:49.135 ********
2025-06-11 14:44:34.806924 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.806935 | orchestrator |
2025-06-11 14:44:34.806946 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.806975 | orchestrator | Wednesday 11 June 2025 14:44:30 +0000 (0:00:00.221) 0:00:49.357 ********
2025-06-11 14:44:34.806986 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.806997 | orchestrator |
2025-06-11 14:44:34.807007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807018 | orchestrator | Wednesday 11 June 2025 14:44:31 +0000 (0:00:00.196) 0:00:49.553 ********
2025-06-11 14:44:34.807028 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.807039 | orchestrator |
2025-06-11 14:44:34.807049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807060 | orchestrator | Wednesday 11 June 2025 14:44:31 +0000 (0:00:00.192) 0:00:49.745 ********
2025-06-11 14:44:34.807071 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.807081 | orchestrator |
2025-06-11 14:44:34.807092 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807102 | orchestrator | Wednesday 11 June 2025 14:44:31 +0000 (0:00:00.199) 0:00:49.945 ********
2025-06-11 14:44:34.807113 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.807123 | orchestrator |
2025-06-11 14:44:34.807134 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807144 | orchestrator | Wednesday 11 June 2025 14:44:32 +0000 (0:00:00.586) 0:00:50.531 ********
2025-06-11 14:44:34.807155 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.807165 | orchestrator |
2025-06-11 14:44:34.807181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807192 | orchestrator | Wednesday 11 June 2025 14:44:32 +0000 (0:00:00.197) 0:00:50.728 ********
2025-06-11 14:44:34.807203 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:34.807213 | orchestrator |
2025-06-11 14:44:34.807223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807234 | orchestrator | Wednesday 11 June 2025 14:44:32 +0000 (0:00:00.199) 0:00:50.928 ********
2025-06-11 14:44:34.807244 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9)
2025-06-11 14:44:34.807256 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9)
2025-06-11 14:44:34.807267 | orchestrator |
2025-06-11 14:44:34.807278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807288 | orchestrator | Wednesday 11 June 2025 14:44:32 +0000 (0:00:00.425) 0:00:51.353 ********
2025-06-11 14:44:34.807299 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b)
2025-06-11 14:44:34.807309 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b)
2025-06-11 14:44:34.807320 | orchestrator |
2025-06-11 14:44:34.807330 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807341 | orchestrator | Wednesday 11 June 2025 14:44:33 +0000 (0:00:00.401) 0:00:51.754 ********
2025-06-11 14:44:34.807351 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961)
2025-06-11 14:44:34.807362 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961)
2025-06-11 14:44:34.807373 | orchestrator |
2025-06-11 14:44:34.807384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807395 | orchestrator | Wednesday 11 June 2025 14:44:33 +0000 (0:00:00.428) 0:00:52.183 ********
2025-06-11 14:44:34.807405 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86)
2025-06-11 14:44:34.807416 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86)
2025-06-11 14:44:34.807427 | orchestrator |
2025-06-11 14:44:34.807444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-11 14:44:34.807455 | orchestrator | Wednesday 11 June 2025 14:44:34 +0000 (0:00:00.422) 0:00:52.605 ********
2025-06-11 14:44:34.807465 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-11 14:44:34.807476 | orchestrator |
2025-06-11 14:44:34.807487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:34.807497 | orchestrator | Wednesday 11 June 2025 14:44:34 +0000 (0:00:00.316) 0:00:52.922 ********
2025-06-11 14:44:34.807508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-11 14:44:34.807519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-11 14:44:34.807529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-11 14:44:34.807540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-11 14:44:34.807551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-11 14:44:34.807561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-11 14:44:34.807572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-11 14:44:34.807582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-11 14:44:34.807593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-11 14:44:34.807604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-11 14:44:34.807614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-11 14:44:34.807631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-11 14:44:43.599388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-11 14:44:43.599499 | orchestrator |
2025-06-11 14:44:43.599515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599527 | orchestrator | Wednesday 11 June 2025 14:44:34 +0000 (0:00:00.395) 0:00:53.317 ********
2025-06-11 14:44:43.599538 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599550 | orchestrator |
2025-06-11 14:44:43.599562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599573 | orchestrator | Wednesday 11 June 2025 14:44:34 +0000 (0:00:00.193) 0:00:53.511 ********
2025-06-11 14:44:43.599584 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599595 | orchestrator |
2025-06-11 14:44:43.599606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599616 | orchestrator | Wednesday 11 June 2025 14:44:35 +0000 (0:00:00.186) 0:00:53.697 ********
2025-06-11 14:44:43.599627 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599638 | orchestrator |
2025-06-11 14:44:43.599649 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599659 | orchestrator | Wednesday 11 June 2025 14:44:35 +0000 (0:00:00.603) 0:00:54.300 ********
2025-06-11 14:44:43.599670 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599681 | orchestrator |
2025-06-11 14:44:43.599709 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599720 | orchestrator | Wednesday 11 June 2025 14:44:35 +0000 (0:00:00.197) 0:00:54.498 ********
2025-06-11 14:44:43.599731 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599742 | orchestrator |
2025-06-11 14:44:43.599753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599763 | orchestrator | Wednesday 11 June 2025 14:44:36 +0000 (0:00:00.193) 0:00:54.691 ********
2025-06-11 14:44:43.599821 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599852 | orchestrator |
2025-06-11 14:44:43.599863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599874 | orchestrator | Wednesday 11 June 2025 14:44:36 +0000 (0:00:00.206) 0:00:54.898 ********
2025-06-11 14:44:43.599884 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599895 | orchestrator |
2025-06-11 14:44:43.599906 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599918 | orchestrator | Wednesday 11 June 2025 14:44:36 +0000 (0:00:00.193) 0:00:55.091 ********
2025-06-11 14:44:43.599930 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.599942 | orchestrator |
2025-06-11 14:44:43.599954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.599966 | orchestrator | Wednesday 11 June 2025 14:44:36 +0000 (0:00:00.192) 0:00:55.284 ********
2025-06-11 14:44:43.599978 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-11 14:44:43.599991 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-11 14:44:43.600003 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-11 14:44:43.600015 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-11 14:44:43.600027 | orchestrator |
2025-06-11 14:44:43.600038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.600050 | orchestrator | Wednesday 11 June 2025 14:44:37 +0000 (0:00:00.629) 0:00:55.913 ********
2025-06-11 14:44:43.600062 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.600074 | orchestrator |
2025-06-11 14:44:43.600086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.600098 | orchestrator | Wednesday 11 June 2025 14:44:37 +0000 (0:00:00.210) 0:00:56.124 ********
2025-06-11 14:44:43.600110 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.600121 | orchestrator |
2025-06-11 14:44:43.600133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.600145 | orchestrator | Wednesday 11 June 2025 14:44:37 +0000 (0:00:00.199) 0:00:56.324 ********
2025-06-11 14:44:43.600157 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.600169 | orchestrator |
2025-06-11 14:44:43.600181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-11 14:44:43.600193 | orchestrator | Wednesday 11 June 2025 14:44:37 +0000 (0:00:00.198) 0:00:56.523 ********
2025-06-11 14:44:43.600204 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.600216 | orchestrator |
2025-06-11 14:44:43.600227 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-11 14:44:43.600240 | orchestrator | Wednesday 11 June 2025 14:44:38 +0000 (0:00:00.194) 0:00:56.717 ********
2025-06-11 14:44:43.600251 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:44:43.600263 | orchestrator |
2025-06-11 14:44:43.600274 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-11 14:44:43.600285 | orchestrator | Wednesday 11 June 2025 14:44:38 +0000 (0:00:00.135) 0:00:56.852 ********
2025-06-11 14:44:43.600295 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af7ee71e-f6e2-506a-9b19-157b61fbf28d'}})
2025-06-11 14:44:43.600306 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}})
{'osd_lvm_uuid': 'ee9e3135-eac7-54c9-a7bd-c984355157b1'}}) 2025-06-11 14:44:43.600317 | orchestrator | 2025-06-11 14:44:43.600328 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-11 14:44:43.600338 | orchestrator | Wednesday 11 June 2025 14:44:38 +0000 (0:00:00.380) 0:00:57.233 ******** 2025-06-11 14:44:43.600350 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'}) 2025-06-11 14:44:43.600362 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'}) 2025-06-11 14:44:43.600373 | orchestrator | 2025-06-11 14:44:43.600384 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-11 14:44:43.600423 | orchestrator | Wednesday 11 June 2025 14:44:40 +0000 (0:00:01.867) 0:00:59.101 ******** 2025-06-11 14:44:43.600435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:43.600446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:43.600457 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600468 | orchestrator | 2025-06-11 14:44:43.600478 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-11 14:44:43.600489 | orchestrator | Wednesday 11 June 2025 14:44:40 +0000 (0:00:00.159) 0:00:59.260 ******** 2025-06-11 14:44:43.600499 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'}) 2025-06-11 14:44:43.600510 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'}) 2025-06-11 14:44:43.600521 | orchestrator | 2025-06-11 14:44:43.600532 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-11 14:44:43.600542 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:01.364) 0:01:00.624 ******** 2025-06-11 14:44:43.600553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:43.600563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:43.600574 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600585 | orchestrator | 2025-06-11 14:44:43.600595 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-11 14:44:43.600606 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.162) 0:01:00.787 ******** 2025-06-11 14:44:43.600616 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600627 | orchestrator | 2025-06-11 14:44:43.600638 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-11 14:44:43.600648 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.138) 0:01:00.925 
******** 2025-06-11 14:44:43.600659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:43.600670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:43.600681 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600691 | orchestrator | 2025-06-11 14:44:43.600702 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-11 14:44:43.600712 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.152) 0:01:01.077 ******** 2025-06-11 14:44:43.600723 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600733 | orchestrator | 2025-06-11 14:44:43.600744 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-11 14:44:43.600754 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.135) 0:01:01.213 ******** 2025-06-11 14:44:43.600765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:43.600818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:43.600830 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600840 | orchestrator | 2025-06-11 14:44:43.600851 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-11 14:44:43.600869 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.159) 0:01:01.373 ******** 2025-06-11 14:44:43.600879 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600890 | orchestrator | 2025-06-11 14:44:43.600901 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-11 14:44:43.600911 | orchestrator | Wednesday 11 June 2025 14:44:42 +0000 (0:00:00.129) 0:01:01.502 ******** 2025-06-11 14:44:43.600922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:43.600932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:43.600943 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:43.600954 | orchestrator | 2025-06-11 14:44:43.600964 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-11 14:44:43.600975 | orchestrator | Wednesday 11 June 2025 14:44:43 +0000 (0:00:00.145) 0:01:01.648 ******** 2025-06-11 14:44:43.600985 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:43.600996 | orchestrator | 2025-06-11 14:44:43.601006 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-11 14:44:43.601017 | orchestrator | Wednesday 11 June 2025 14:44:43 +0000 (0:00:00.134) 0:01:01.783 ******** 2025-06-11 14:44:43.601035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:49.591551 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:49.591691 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.591708 | orchestrator | 2025-06-11 14:44:49.591720 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-11 14:44:49.591733 | orchestrator | Wednesday 11 June 2025 14:44:43 +0000 (0:00:00.336) 0:01:02.119 ******** 2025-06-11 14:44:49.591743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:49.591753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:49.591763 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.591832 | orchestrator | 2025-06-11 14:44:49.591843 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-11 14:44:49.591874 | orchestrator | Wednesday 11 June 2025 14:44:43 +0000 (0:00:00.146) 0:01:02.266 ******** 2025-06-11 14:44:49.591885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:49.591895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:49.591905 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.591914 | orchestrator | 2025-06-11 14:44:49.591924 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-11 14:44:49.591934 | orchestrator | Wednesday 11 June 2025 14:44:43 +0000 (0:00:00.154) 0:01:02.420 ******** 2025-06-11 14:44:49.591944 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.591954 | orchestrator | 2025-06-11 14:44:49.591964 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-11 14:44:49.591974 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 (0:00:00.140) 0:01:02.561 ******** 2025-06-11 14:44:49.591984 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.591993 | orchestrator | 2025-06-11 14:44:49.592003 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-11 14:44:49.592042 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 (0:00:00.137) 0:01:02.699 ******** 2025-06-11 14:44:49.592054 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592064 | orchestrator | 2025-06-11 14:44:49.592075 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-11 14:44:49.592086 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 (0:00:00.139) 0:01:02.839 ******** 2025-06-11 14:44:49.592097 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:44:49.592108 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-11 14:44:49.592120 | orchestrator | } 2025-06-11 14:44:49.592131 | orchestrator | 2025-06-11 14:44:49.592142 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-11 14:44:49.592153 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 
(0:00:00.143) 0:01:02.982 ******** 2025-06-11 14:44:49.592164 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:44:49.592175 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-11 14:44:49.592186 | orchestrator | } 2025-06-11 14:44:49.592197 | orchestrator | 2025-06-11 14:44:49.592207 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-11 14:44:49.592218 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 (0:00:00.140) 0:01:03.122 ******** 2025-06-11 14:44:49.592228 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:44:49.592239 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-11 14:44:49.592250 | orchestrator | } 2025-06-11 14:44:49.592261 | orchestrator | 2025-06-11 14:44:49.592271 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-11 14:44:49.592282 | orchestrator | Wednesday 11 June 2025 14:44:44 +0000 (0:00:00.133) 0:01:03.256 ******** 2025-06-11 14:44:49.592293 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:49.592304 | orchestrator | 2025-06-11 14:44:49.592315 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-11 14:44:49.592326 | orchestrator | Wednesday 11 June 2025 14:44:45 +0000 (0:00:00.519) 0:01:03.775 ******** 2025-06-11 14:44:49.592338 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:49.592349 | orchestrator | 2025-06-11 14:44:49.592360 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-11 14:44:49.592370 | orchestrator | Wednesday 11 June 2025 14:44:45 +0000 (0:00:00.519) 0:01:04.295 ******** 2025-06-11 14:44:49.592381 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:49.592392 | orchestrator | 2025-06-11 14:44:49.592402 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-11 14:44:49.592413 | orchestrator | Wednesday 11 June 2025 14:44:46 +0000 (0:00:00.543) 0:01:04.839 ******** 2025-06-11 14:44:49.592424 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:49.592435 | orchestrator | 2025-06-11 14:44:49.592446 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-11 14:44:49.592456 | orchestrator | Wednesday 11 June 2025 14:44:46 +0000 (0:00:00.330) 0:01:05.169 ******** 2025-06-11 14:44:49.592466 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592476 | orchestrator | 2025-06-11 14:44:49.592485 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-11 14:44:49.592495 | orchestrator | Wednesday 11 June 2025 14:44:46 +0000 (0:00:00.111) 0:01:05.280 ******** 2025-06-11 14:44:49.592504 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592514 | orchestrator | 2025-06-11 14:44:49.592524 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-11 14:44:49.592533 | orchestrator | Wednesday 11 June 2025 14:44:46 +0000 (0:00:00.112) 0:01:05.393 ******** 2025-06-11 14:44:49.592543 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:44:49.592553 | orchestrator |  "vgs_report": { 2025-06-11 14:44:49.592562 | orchestrator |  "vg": [] 2025-06-11 14:44:49.592592 | orchestrator |  } 2025-06-11 14:44:49.592602 | orchestrator | } 2025-06-11 14:44:49.592612 | orchestrator | 2025-06-11 14:44:49.592621 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-06-11 14:44:49.592631 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.141) 0:01:05.535 ******** 2025-06-11 14:44:49.592648 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592658 | orchestrator | 2025-06-11 14:44:49.592668 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-11 14:44:49.592677 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.137) 0:01:05.672 ******** 2025-06-11 14:44:49.592686 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592696 | orchestrator | 2025-06-11 14:44:49.592705 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-11 14:44:49.592715 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.131) 0:01:05.804 ******** 2025-06-11 14:44:49.592724 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592733 | orchestrator | 2025-06-11 14:44:49.592743 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-11 14:44:49.592752 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.138) 0:01:05.942 ******** 2025-06-11 14:44:49.592762 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592799 | orchestrator | 2025-06-11 14:44:49.592823 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-11 14:44:49.592840 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.135) 0:01:06.077 ******** 2025-06-11 14:44:49.592854 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592869 | orchestrator | 2025-06-11 14:44:49.592885 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-11 14:44:49.592900 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.137) 0:01:06.215 ******** 2025-06-11 14:44:49.592914 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592928 | orchestrator | 2025-06-11 14:44:49.592944 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-11 14:44:49.592958 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.131) 0:01:06.347 ******** 2025-06-11 14:44:49.592974 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.592990 | orchestrator | 2025-06-11 14:44:49.593005 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-11 14:44:49.593020 | orchestrator | Wednesday 11 June 2025 14:44:47 +0000 (0:00:00.140) 0:01:06.487 ******** 2025-06-11 14:44:49.593036 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593053 | orchestrator | 2025-06-11 14:44:49.593070 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-11 14:44:49.593087 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.125) 0:01:06.613 ******** 2025-06-11 14:44:49.593098 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593108 | orchestrator | 2025-06-11 14:44:49.593118 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-11 14:44:49.593127 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.345) 0:01:06.959 ******** 2025-06-11 14:44:49.593137 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593146 | orchestrator | 2025-06-11 14:44:49.593155 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-11 14:44:49.593165 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.144) 0:01:07.103 ******** 2025-06-11 14:44:49.593174 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593184 | orchestrator | 2025-06-11 14:44:49.593193 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-11 14:44:49.593203 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.134) 0:01:07.237 ******** 2025-06-11 14:44:49.593212 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593222 | orchestrator | 2025-06-11 14:44:49.593231 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-11 14:44:49.593240 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.135) 0:01:07.372 ******** 2025-06-11 14:44:49.593250 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593259 | orchestrator | 2025-06-11 14:44:49.593269 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-11 14:44:49.593278 | orchestrator | Wednesday 11 June 2025 14:44:48 +0000 (0:00:00.130) 0:01:07.502 ******** 2025-06-11 14:44:49.593297 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593306 | orchestrator | 2025-06-11 14:44:49.593316 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-11 14:44:49.593325 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.134) 0:01:07.637 ******** 2025-06-11 14:44:49.593335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:49.593345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:49.593355 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593364 | orchestrator | 2025-06-11 14:44:49.593373 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-11 14:44:49.593383 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.167) 0:01:07.805 ******** 2025-06-11 14:44:49.593392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:49.593402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:49.593411 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:49.593421 | orchestrator | 2025-06-11 14:44:49.593430 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-11 14:44:49.593440 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.156) 0:01:07.962 ******** 2025-06-11 14:44:49.593459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541154 | 
orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541167 | orchestrator | 2025-06-11 14:44:52.541177 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-11 14:44:52.541186 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.149) 0:01:08.111 ******** 2025-06-11 14:44:52.541194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541209 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541216 | orchestrator | 2025-06-11 14:44:52.541245 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-11 14:44:52.541253 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.157) 0:01:08.268 ******** 2025-06-11 14:44:52.541260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541274 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541281 | orchestrator | 2025-06-11 14:44:52.541288 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-11 14:44:52.541296 | orchestrator | Wednesday 11 June 2025 14:44:49 +0000 (0:00:00.147) 0:01:08.416 ******** 2025-06-11 14:44:52.541303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541340 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541347 | orchestrator | 2025-06-11 14:44:52.541354 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-11 14:44:52.541361 | orchestrator | Wednesday 11 June 2025 14:44:50 +0000 (0:00:00.156) 0:01:08.572 ******** 2025-06-11 14:44:52.541368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541382 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541389 | orchestrator | 2025-06-11 14:44:52.541396 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-11 14:44:52.541403 | orchestrator | Wednesday 11 June 2025 14:44:50 +0000 (0:00:00.349) 0:01:08.922 ******** 2025-06-11 14:44:52.541410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 
14:44:52.541418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541425 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541432 | orchestrator | 2025-06-11 14:44:52.541438 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-11 14:44:52.541445 | orchestrator | Wednesday 11 June 2025 14:44:50 +0000 (0:00:00.145) 0:01:09.067 ******** 2025-06-11 14:44:52.541452 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:52.541461 | orchestrator | 2025-06-11 14:44:52.541468 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-11 14:44:52.541475 | orchestrator | Wednesday 11 June 2025 14:44:51 +0000 (0:00:00.522) 0:01:09.590 ******** 2025-06-11 14:44:52.541482 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:52.541489 | orchestrator | 2025-06-11 14:44:52.541496 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-11 14:44:52.541502 | orchestrator | Wednesday 11 June 2025 14:44:51 +0000 (0:00:00.523) 0:01:10.114 ******** 2025-06-11 14:44:52.541509 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:44:52.541516 | orchestrator | 2025-06-11 14:44:52.541523 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-11 14:44:52.541529 | orchestrator | Wednesday 11 June 2025 14:44:51 +0000 (0:00:00.153) 0:01:10.267 ******** 2025-06-11 14:44:52.541536 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'vg_name': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'}) 2025-06-11 14:44:52.541545 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'vg_name': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'}) 2025-06-11 14:44:52.541552 | orchestrator | 2025-06-11 14:44:52.541560 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-11 14:44:52.541567 | orchestrator | Wednesday 11 June 2025 14:44:51 +0000 (0:00:00.162) 0:01:10.429 ******** 2025-06-11 14:44:52.541595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541612 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541619 | orchestrator | 2025-06-11 14:44:52.541626 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-11 14:44:52.541634 | orchestrator | Wednesday 11 June 2025 14:44:52 +0000 (0:00:00.157) 0:01:10.587 ******** 2025-06-11 14:44:52.541650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541666 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541673 | orchestrator | 2025-06-11 14:44:52.541680 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-11 14:44:52.541688 | orchestrator | Wednesday 11 June 2025 14:44:52 +0000 (0:00:00.156) 0:01:10.743 ******** 2025-06-11 14:44:52.541696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'})  2025-06-11 14:44:52.541703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'})  2025-06-11 14:44:52.541709 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:44:52.541715 | orchestrator | 2025-06-11 14:44:52.541721 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-11 14:44:52.541727 | orchestrator | Wednesday 11 June 2025 14:44:52 +0000 (0:00:00.151) 0:01:10.895 ******** 2025-06-11 14:44:52.541734 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 14:44:52.541759 | orchestrator |  "lvm_report": { 2025-06-11 14:44:52.541781 | orchestrator |  "lv": [ 2025-06-11 14:44:52.541797 | orchestrator |  { 2025-06-11 14:44:52.541805 | orchestrator |  "lv_name": "osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d", 2025-06-11 14:44:52.541815 | orchestrator |  "vg_name": "ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d" 2025-06-11 14:44:52.541822 | orchestrator |  }, 2025-06-11 14:44:52.541830 | orchestrator |  { 2025-06-11 14:44:52.541838 | orchestrator |  "lv_name": "osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1", 2025-06-11 14:44:52.541847 | orchestrator |  "vg_name": "ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1" 2025-06-11 14:44:52.541854 | orchestrator |  } 2025-06-11 14:44:52.541863 | orchestrator |  ], 2025-06-11 14:44:52.541870 | orchestrator |  "pv": [ 2025-06-11 14:44:52.541878 | orchestrator |  { 2025-06-11 14:44:52.541886 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-11 14:44:52.541895 | orchestrator |  "vg_name": "ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d" 2025-06-11 14:44:52.541902 | orchestrator |  }, 2025-06-11 14:44:52.541909 | orchestrator |  { 2025-06-11 14:44:52.541917 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-11 14:44:52.541925 | orchestrator |  "vg_name": "ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1" 2025-06-11 14:44:52.541932 | orchestrator |  } 2025-06-11 14:44:52.541940 | orchestrator |  ] 2025-06-11 14:44:52.541947 | orchestrator |  } 2025-06-11 14:44:52.541955 | orchestrator | } 2025-06-11 14:44:52.541963 | orchestrator | 2025-06-11 14:44:52.541971 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:44:52.541979 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-11 14:44:52.541987 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-11 14:44:52.541995 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-11 14:44:52.542003 | orchestrator | 2025-06-11 14:44:52.542010 | orchestrator | 2025-06-11 14:44:52.542078 | orchestrator | 2025-06-11 14:44:52.542086 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:44:52.542094 | orchestrator | Wednesday 11 June 2025 14:44:52 +0000 (0:00:00.145) 0:01:11.040 ******** 2025-06-11 14:44:52.542108 | orchestrator | 
=============================================================================== 2025-06-11 14:44:52.542115 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s 2025-06-11 14:44:52.542123 | orchestrator | Create block LVs -------------------------------------------------------- 4.23s 2025-06-11 14:44:52.542130 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.92s 2025-06-11 14:44:52.542137 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s 2025-06-11 14:44:52.542143 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2025-06-11 14:44:52.542150 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2025-06-11 14:44:52.542157 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2025-06-11 14:44:52.542164 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2025-06-11 14:44:52.542179 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2025-06-11 14:44:52.917405 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2025-06-11 14:44:52.917525 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s 2025-06-11 14:44:52.917535 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-06-11 14:44:52.917543 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.76s 2025-06-11 14:44:52.917551 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s 2025-06-11 14:44:52.917558 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s 2025-06-11 14:44:52.917591 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-06-11 14:44:52.917600 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.69s 2025-06-11 14:44:52.917608 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.67s 2025-06-11 14:44:52.917619 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.67s 2025-06-11 14:44:52.917627 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.65s 2025-06-11 14:44:54.858485 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:44:54.858629 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:44:54.858646 | orchestrator | Registering Redlock._release_script 2025-06-11 14:44:54.922837 | orchestrator | 2025-06-11 14:44:54 | INFO  | Task e0912f44-254b-4049-9635-7e712481c684 (facts) was prepared for execution. 2025-06-11 14:44:54.922914 | orchestrator | 2025-06-11 14:44:54 | INFO  | It takes a moment until task e0912f44-254b-4049-9635-7e712481c684 (facts) has been started and output is visible here. 
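The LVM play that just finished (see the PLAY RECAP and TASKS RECAP above) turned each entry of ceph_osd_devices on the storage nodes into a dedicated one-PV volume group named ceph-<osd_lvm_uuid> holding a single logical volume osd-block-<osd_lvm_uuid>; these are the bluestore block devices that ceph-volume consumes later, and they reappear in the lvm_report output. A minimal sketch of equivalent standalone Ansible tasks follows, assuming a hypothetical osd_devices variable and host group that mirror the two devices visible in the log (the real mapping comes from ceph_osd_devices in the OSISM configuration, not from this sketch):

- name: Create block VGs and LVs per OSD device (illustrative sketch)
  hosts: ceph_osds              # illustrative group name
  become: true
  vars:
    osd_devices:                # assumed example data, copied from the log above
      sdb: af7ee71e-f6e2-506a-9b19-157b61fbf28d
      sdc: ee9e3135-eac7-54c9-a7bd-c984355157b1
  tasks:
    - name: Create one volume group per raw device
      community.general.lvg:
        vg: "ceph-{{ item.value }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ osd_devices | dict2items }}"

    - name: Create the osd-block LV spanning the whole VG
      community.general.lvol:
        vg: "ceph-{{ item.value }}"
        lv: "osd-block-{{ item.value }}"
        size: 100%FREE
        shrink: false           # leave an already-created LV untouched on reruns
      loop: "{{ osd_devices | dict2items }}"

This matches the PV/VG/LV triples printed in the lvm_report above (/dev/sdb and /dev/sdc each backing one ceph-<uuid> VG), and the skipped DB/WAL tasks with empty _num_osds_wanted_per_*_vg dicts are consistent with no ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices being defined for this testbed.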
2025-06-11 14:45:06.754986 | orchestrator | 2025-06-11 14:45:06.755086 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-11 14:45:06.755103 | orchestrator | 2025-06-11 14:45:06.755115 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-11 14:45:06.755126 | orchestrator | Wednesday 11 June 2025 14:44:58 +0000 (0:00:00.268) 0:00:00.268 ******** 2025-06-11 14:45:06.755137 | orchestrator | ok: [testbed-manager] 2025-06-11 14:45:06.755149 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:45:06.755160 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:45:06.755171 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:45:06.755182 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:45:06.755193 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:45:06.755203 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:45:06.755214 | orchestrator | 2025-06-11 14:45:06.755226 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-11 14:45:06.755237 | orchestrator | Wednesday 11 June 2025 14:45:00 +0000 (0:00:01.112) 0:00:01.380 ******** 2025-06-11 14:45:06.755248 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:45:06.755284 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:45:06.755296 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:45:06.755306 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:45:06.755317 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:45:06.755327 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:45:06.755338 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:45:06.755348 | orchestrator | 2025-06-11 14:45:06.755360 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 14:45:06.755371 | orchestrator | 2025-06-11 14:45:06.755382 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 14:45:06.755392 | orchestrator | Wednesday 11 June 2025 14:45:01 +0000 (0:00:01.207) 0:00:02.587 ******** 2025-06-11 14:45:06.755403 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:45:06.755414 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:45:06.755425 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:45:06.755436 | orchestrator | ok: [testbed-manager] 2025-06-11 14:45:06.755446 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:45:06.755457 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:45:06.755467 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:45:06.755478 | orchestrator | 2025-06-11 14:45:06.755489 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-11 14:45:06.755500 | orchestrator | 2025-06-11 14:45:06.755510 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-11 14:45:06.755521 | orchestrator | Wednesday 11 June 2025 14:45:06 +0000 (0:00:04.806) 0:00:07.394 ******** 2025-06-11 14:45:06.755532 | orchestrator | skipping: [testbed-manager] 2025-06-11 14:45:06.755545 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:45:06.755557 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:45:06.755571 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:45:06.755583 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:45:06.755595 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:45:06.755607 | orchestrator | skipping: 
[testbed-node-5] 2025-06-11 14:45:06.755619 | orchestrator | 2025-06-11 14:45:06.755631 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:45:06.755645 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755658 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755670 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755683 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755695 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755708 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755720 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 14:45:06.755732 | orchestrator | 2025-06-11 14:45:06.755745 | orchestrator | 2025-06-11 14:45:06.755757 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:45:06.755789 | orchestrator | Wednesday 11 June 2025 14:45:06 +0000 (0:00:00.478) 0:00:07.872 ******** 2025-06-11 14:45:06.755801 | orchestrator | =============================================================================== 2025-06-11 14:45:06.755813 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2025-06-11 14:45:06.755825 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-06-11 14:45:06.755838 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-06-11 14:45:06.755872 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-06-11 14:45:06.907581 | orchestrator | 2025-06-11 14:45:06.908642 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Jun 11 14:45:06 UTC 2025 2025-06-11 14:45:06.908673 | orchestrator | 2025-06-11 14:45:08.364083 | orchestrator | 2025-06-11 14:45:08 | INFO  | Collection nutshell is prepared for execution 2025-06-11 14:45:08.364191 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [0] - dotfiles 2025-06-11 14:45:08.368154 | orchestrator | Registering Redlock._acquired_script 2025-06-11 14:45:08.368225 | orchestrator | Registering Redlock._extend_script 2025-06-11 14:45:08.368240 | orchestrator | Registering Redlock._release_script 2025-06-11 14:45:08.372068 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [0] - homer 2025-06-11 14:45:08.372137 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [0] - netdata 2025-06-11 14:45:08.372152 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [0] - openstackclient 2025-06-11 14:45:08.372164 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [0] - phpmyadmin 2025-06-11 14:45:08.372248 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [0] - common 2025-06-11 14:45:08.373423 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [1] -- loadbalancer 2025-06-11 14:45:08.373497 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [2] --- opensearch 2025-06-11 14:45:08.373698 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [2] --- mariadb-ng 2025-06-11 14:45:08.373721 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [3] ---- horizon 2025-06-11 
14:45:08.373732 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [3] ---- keystone 2025-06-11 14:45:08.373744 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [4] ----- neutron 2025-06-11 14:45:08.373756 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ wait-for-nova 2025-06-11 14:45:08.374054 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [5] ------ octavia 2025-06-11 14:45:08.374375 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- barbican 2025-06-11 14:45:08.374395 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- designate 2025-06-11 14:45:08.374407 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- ironic 2025-06-11 14:45:08.374419 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- placement 2025-06-11 14:45:08.374639 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- magnum 2025-06-11 14:45:08.374787 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [1] -- openvswitch 2025-06-11 14:45:08.374806 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [2] --- ovn 2025-06-11 14:45:08.374919 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [1] -- memcached 2025-06-11 14:45:08.375194 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [1] -- redis 2025-06-11 14:45:08.375216 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [1] -- rabbitmq-ng 2025-06-11 14:45:08.375389 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [0] - kubernetes 2025-06-11 14:45:08.376730 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [1] -- kubeconfig 2025-06-11 14:45:08.376751 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [1] -- copy-kubeconfig 2025-06-11 14:45:08.376804 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [0] - ceph 2025-06-11 14:45:08.378188 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [1] -- ceph-pools 2025-06-11 14:45:08.378210 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [2] --- copy-ceph-keys 2025-06-11 14:45:08.378222 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [3] ---- cephclient 2025-06-11 14:45:08.378407 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-11 14:45:08.378524 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [4] ----- wait-for-keystone 2025-06-11 14:45:08.378541 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-11 14:45:08.378558 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ glance 2025-06-11 14:45:08.378569 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ cinder 2025-06-11 14:45:08.378581 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ nova 2025-06-11 14:45:08.378853 | orchestrator | 2025-06-11 14:45:08 | INFO  | A [4] ----- prometheus 2025-06-11 14:45:08.378873 | orchestrator | 2025-06-11 14:45:08 | INFO  | D [5] ------ grafana 2025-06-11 14:45:08.530508 | orchestrator | 2025-06-11 14:45:08 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-06-11 14:45:08.530587 | orchestrator | 2025-06-11 14:45:08 | INFO  | Tasks are running in the background 2025-06-11 14:45:10.837501 | orchestrator | 2025-06-11 14:45:10 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-11 14:45:12.957939 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:12.958730 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:12.962636 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 
2025-06-11 14:45:12.962688 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:12.962701 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:12.963428 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:12.964257 | orchestrator | 2025-06-11 14:45:12 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:12.966201 | orchestrator | 2025-06-11 14:45:12 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:15.999885 | orchestrator | 2025-06-11 14:45:15 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:15.999994 | orchestrator | 2025-06-11 14:45:15 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:16.000442 | orchestrator | 2025-06-11 14:45:15 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:16.000913 | orchestrator | 2025-06-11 14:45:16 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:16.001457 | orchestrator | 2025-06-11 14:45:16 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:16.001957 | orchestrator | 2025-06-11 14:45:16 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:16.002489 | orchestrator | 2025-06-11 14:45:16 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:16.002618 | orchestrator | 2025-06-11 14:45:16 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:19.035875 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:19.037383 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:19.037446 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:19.037917 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:19.040817 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:19.041194 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:19.041680 | orchestrator | 2025-06-11 14:45:19 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:19.041750 | orchestrator | 2025-06-11 14:45:19 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:22.104616 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:22.109483 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:22.109532 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:22.110834 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:22.112835 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:22.113408 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task 
9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:22.115753 | orchestrator | 2025-06-11 14:45:22 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:22.115852 | orchestrator | 2025-06-11 14:45:22 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:25.154847 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:25.154940 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:25.154955 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:25.154966 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:25.154977 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:25.155004 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:25.155016 | orchestrator | 2025-06-11 14:45:25 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:25.155027 | orchestrator | 2025-06-11 14:45:25 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:28.211667 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:28.212728 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:28.213425 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:28.214536 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:28.217255 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:28.217335 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:28.217720 | orchestrator | 2025-06-11 14:45:28 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:28.217893 | orchestrator | 2025-06-11 14:45:28 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:31.257421 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:31.260026 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:31.262103 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:31.262407 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:31.264908 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state STARTED 2025-06-11 14:45:31.266232 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:45:31.267293 | orchestrator | 2025-06-11 14:45:31 | INFO  | Task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a is in state STARTED 2025-06-11 14:45:31.267317 | orchestrator | 2025-06-11 14:45:31 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:45:34.320618 | orchestrator | 2025-06-11 
14:45:34 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state STARTED 2025-06-11 14:45:34.321565 | orchestrator | 2025-06-11 14:45:34 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:45:34.322125 | orchestrator | 2025-06-11 14:45:34 | INFO  | Task f080ed64-2cb2-4f80-b535-eade9fa4467a is in state STARTED 2025-06-11 14:45:34.322716 | orchestrator | 2025-06-11 14:45:34 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:45:34.326522 | orchestrator | 2025-06-11 14:45:34.326544 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-11 14:45:34.326552 | orchestrator | 2025-06-11 14:45:34.326559 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-11 14:45:34.326565 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:00.549) 0:00:00.549 ******** 2025-06-11 14:45:34.326575 | orchestrator | changed: [testbed-manager] 2025-06-11 14:45:34.326584 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:45:34.326593 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:45:34.326602 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:45:34.326611 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:45:34.326620 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:45:34.326628 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:45:34.326637 | orchestrator | 2025-06-11 14:45:34.326646 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-11 14:45:34.326655 | orchestrator | Wednesday 11 June 2025 14:45:23 +0000 (0:00:04.523) 0:00:05.072 ******** 2025-06-11 14:45:34.326665 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-11 14:45:34.326674 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-11 14:45:34.326683 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-11 14:45:34.326692 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-11 14:45:34.326701 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-11 14:45:34.326710 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-11 14:45:34.326719 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-11 14:45:34.326728 | orchestrator | 2025-06-11 14:45:34.326736 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-06-11 14:45:34.326745 | orchestrator | Wednesday 11 June 2025 14:45:25 +0000 (0:00:02.324) 0:00:07.396 ********
2025-06-11 14:45:34.326794 | orchestrator | ok: [testbed-manager] => (item=[0, {...}]) [seven near-identical loop results condensed: on testbed-manager and testbed-node-0 through testbed-node-5, the preceding check `ls -F ~/.tmux.conf` exited with rc=2 and stderr "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory" (failed: False, failed_when_result: False), i.e. there was no pre-existing ~/.tmux.conf to remove on any host]
2025-06-11 14:45:34.327078 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
**** 2025-06-11 14:45:34.327088 | orchestrator | Wednesday 11 June 2025 14:45:28 +0000 (0:00:02.827) 0:00:10.224 ******** 2025-06-11 14:45:34.327097 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-11 14:45:34.327107 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-11 14:45:34.327116 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-11 14:45:34.327125 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-11 14:45:34.327133 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-11 14:45:34.327142 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-11 14:45:34.327152 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-11 14:45:34.327161 | orchestrator | 2025-06-11 14:45:34.327171 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-11 14:45:34.327180 | orchestrator | Wednesday 11 June 2025 14:45:30 +0000 (0:00:01.540) 0:00:11.764 ******** 2025-06-11 14:45:34.327190 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-11 14:45:34.327199 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-11 14:45:34.327208 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-11 14:45:34.327217 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-11 14:45:34.327226 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-11 14:45:34.327235 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-11 14:45:34.327243 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-11 14:45:34.327253 | orchestrator | 2025-06-11 14:45:34.327262 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:45:34.327278 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327289 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327297 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327306 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327315 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327329 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327338 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:45:34.327347 | orchestrator | 2025-06-11 14:45:34.327356 | orchestrator | 2025-06-11 14:45:34.327365 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:45:34.327374 | orchestrator | Wednesday 11 June 2025 14:45:33 +0000 (0:00:03.163) 0:00:14.928 ******** 2025-06-11 14:45:34.327383 | orchestrator | =============================================================================== 2025-06-11 14:45:34.327392 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.52s 2025-06-11 14:45:34.327401 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 3.16s
2025-06-11 14:45:34.327410 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.83s
2025-06-11 14:45:34.327419 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.32s
2025-06-11 14:45:34.327428 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.54s
2025-06-11 14:45:34.327437 | orchestrator | 2025-06-11 14:45:34 | INFO  | Task d36dbddf-a394-4494-a6eb-c8fdf72a5fd2 is in state SUCCESS
[2025-06-11 14:45:34 to 14:46:17 | orchestrator | polling condensed: task e63c78bc-2dc7-4f62-ae6b-ef4134a4c08d first appeared in the checks at 14:45:37; task 18f6fed7-cc88-4ed8-baed-0fc48dde8f1a reached state SUCCESS at 14:45:52 and task f080ed64-2cb2-4f80-b535-eade9fa4467a at 14:46:04; all other tasks were reported in state STARTED on every ~3-second check]
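[Aside, an editor's sketch rather than part of the captured output: the play above runs the public geerlingguy.dotfiles role, which clones a dotfiles repository on every host and then symlinks each configured file into the user's home (the clone, link-check, remove and link tasks logged above). A minimal sketch of how such a play is typically parameterized; the repository URL and clone destination below are illustrative assumptions, and only the .tmux.conf item is taken from the log:

- name: Apply role geerlingguy.dotfiles
  hosts: all                      # this run targeted testbed-manager and testbed-node-0..5
  roles:
    - role: geerlingguy.dotfiles
      vars:
        dotfiles_repo: https://github.com/example/dotfiles.git   # assumed URL, not from the log
        dotfiles_repo_local_destination: ~/dotfiles               # assumed clone path
        dotfiles_files:
          - .tmux.conf                                            # the only item looped in this run

In the run above the clone and the final link reported changed on all seven hosts while the intermediate checks reported ok, matching ok=5 changed=2 in the recap.]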
2025-06-11 14:46:20.141663 | orchestrator | 2025-06-11 14:46:20 | INFO  | Task f76dd1f8-a7dc-4f1c-8781-cc46229b5945 is in state SUCCESS 2025-06-11 14:46:20.143117 | orchestrator | 2025-06-11 14:46:20.143169 | orchestrator | 2025-06-11 14:46:20.143182 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-11 14:46:20.143195 | orchestrator | 2025-06-11 14:46:20.143207 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-11 14:46:20.143225 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:00.226) 0:00:00.226 ******** 2025-06-11 14:46:20.143237 | orchestrator | ok: [testbed-manager] => { 2025-06-11 14:46:20.143251 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-11 14:46:20.143264 | orchestrator | } 2025-06-11 14:46:20.143275 | orchestrator | 2025-06-11 14:46:20.143287 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-11 14:46:20.143298 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:00.209) 0:00:00.435 ******** 2025-06-11 14:46:20.143309 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.143322 | orchestrator | 2025-06-11 14:46:20.143333 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-11 14:46:20.143342 | orchestrator | Wednesday 11 June 2025 14:45:20 +0000 (0:00:01.398) 0:00:01.833 ******** 2025-06-11 14:46:20.143352 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-11 14:46:20.143361 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-11 14:46:20.143372 | orchestrator | 2025-06-11 14:46:20.143381 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-11 14:46:20.143391 | orchestrator | Wednesday 11 June 2025 14:45:21 +0000 (0:00:01.481) 0:00:03.314 ******** 2025-06-11 14:46:20.143401 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.143410 | orchestrator | 2025-06-11 14:46:20.143448 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-11 14:46:20.143459 | orchestrator | Wednesday 11 June 2025 14:45:23 +0000 (0:00:01.531) 0:00:04.846 ******** 2025-06-11 14:46:20.143469 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.143479 | orchestrator | 2025-06-11 14:46:20.143489 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-11 14:46:20.143499 | orchestrator | Wednesday 11 June 2025 14:45:24 +0000 (0:00:01.496) 0:00:06.342 ******** 2025-06-11 14:46:20.143509 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-06-11 14:46:20.143519 | orchestrator | ok: [testbed-manager]
2025-06-11 14:46:20.143539 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-11 14:46:20.143549 | orchestrator | Wednesday 11 June 2025 14:45:48 +0000 (0:00:23.984) 0:00:30.327 ********
2025-06-11 14:46:20.143559 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.143579 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:46:20.143590 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.143622 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:46:20.143632 | orchestrator | Wednesday 11 June 2025 14:45:50 +0000 (0:00:01.514) 0:00:31.842 ********
2025-06-11 14:46:20.143642 | orchestrator | ===============================================================================
2025-06-11 14:46:20.143652 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.98s
2025-06-11 14:46:20.143662 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.53s
2025-06-11 14:46:20.143692 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.51s
2025-06-11 14:46:20.143703 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.50s
2025-06-11 14:46:20.143714 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.48s
2025-06-11 14:46:20.143725 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.40s
2025-06-11 14:46:20.143763 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.21s
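[Aside, an editor's sketch rather than part of the captured output: the osism.services.homer role above copies a config.yml and a docker-compose.yml under /opt/homer and then manages the dashboard as a compose service attached to the external traefik network it created first. Roughly what such a compose file can look like; the image tag, volume target and network wiring below are assumptions, not values from this deployment:

services:
  homer:
    image: b4bz/homer:latest                    # assumed image/tag for the Homer dashboard
    restart: unless-stopped
    volumes:
      - /opt/homer/configuration:/www/assets    # the config.yml copied by the role above
    networks:
      - traefik
networks:
  traefik:
    external: true                              # matches "Create traefik external network"

The role's "Manage homer service" task brings such a project up, which is where the single retry above was presumably spent while the image was first pulled.]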
2025-06-11 14:46:20.143806 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-11 14:46:20.143850 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-11 14:46:20.143867 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:00.314) 0:00:00.314 ********
2025-06-11 14:46:20.143883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-11 14:46:20.143915 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-11 14:46:20.143932 | orchestrator | Wednesday 11 June 2025 14:45:19 +0000 (0:00:00.650) 0:00:00.964 ********
2025-06-11 14:46:20.143949 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-11 14:46:20.143966 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-11 14:46:20.143985 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-11 14:46:20.144016 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-11 14:46:20.144027 | orchestrator | Wednesday 11 June 2025 14:45:21 +0000 (0:00:01.746) 0:00:02.711 ********
2025-06-11 14:46:20.144038 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.144058 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-11 14:46:20.144068 | orchestrator | Wednesday 11 June 2025 14:45:22 +0000 (0:00:01.513) 0:00:04.225 ********
2025-06-11 14:46:20.144092 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-11 14:46:20.144102 | orchestrator | ok: [testbed-manager]
2025-06-11 14:46:20.144121 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-11 14:46:20.144131 | orchestrator | Wednesday 11 June 2025 14:45:57 +0000 (0:00:34.476) 0:00:38.701 ********
2025-06-11 14:46:20.144141 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.144159 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-11 14:46:20.144169 | orchestrator | Wednesday 11 June 2025 14:45:57 +0000 (0:00:00.921) 0:00:39.623 ********
2025-06-11 14:46:20.144178 | orchestrator | ok: [testbed-manager]
2025-06-11 14:46:20.144197 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-11 14:46:20.144207 | orchestrator | Wednesday 11 June 2025 14:45:59 +0000 (0:00:01.233) 0:00:40.857 ********
2025-06-11 14:46:20.144216 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.144235 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-11 14:46:20.144244 | orchestrator | Wednesday 11 June 2025 14:46:01 +0000 (0:00:02.148) 0:00:43.005 ********
2025-06-11 14:46:20.144254 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.144272 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-11 14:46:20.144282 | orchestrator | Wednesday 11 June 2025 14:46:02 +0000 (0:00:00.774) 0:00:43.780 ********
2025-06-11 14:46:20.144291 | orchestrator | changed: [testbed-manager]
2025-06-11 14:46:20.144320 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-11 14:46:20.144330 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.996) 0:00:44.777 ********
2025-06-11 14:46:20.144339 | orchestrator | ok: [testbed-manager]
2025-06-11 14:46:20.144358 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:46:20.144368 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.144397 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:46:20.144406 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.330) 0:00:45.108 ********
2025-06-11 14:46:20.144416 | orchestrator |
=============================================================================== 2025-06-11 14:46:20.144425 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.48s 2025-06-11 14:46:20.144435 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.15s 2025-06-11 14:46:20.144444 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.75s 2025-06-11 14:46:20.144453 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.51s 2025-06-11 14:46:20.144463 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.23s 2025-06-11 14:46:20.144472 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.00s 2025-06-11 14:46:20.144482 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.92s 2025-06-11 14:46:20.144491 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2025-06-11 14:46:20.144501 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.65s 2025-06-11 14:46:20.144510 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.33s 2025-06-11 14:46:20.144519 | orchestrator | 2025-06-11 14:46:20.144529 | orchestrator | 2025-06-11 14:46:20.144538 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 14:46:20.144547 | orchestrator | 2025-06-11 14:46:20.144557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 14:46:20.144566 | orchestrator | Wednesday 11 June 2025 14:45:19 +0000 (0:00:00.588) 0:00:00.588 ******** 2025-06-11 14:46:20.144576 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-11 14:46:20.144585 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-11 14:46:20.144594 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-11 14:46:20.144604 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-11 14:46:20.144613 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-11 14:46:20.144652 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-11 14:46:20.144663 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-11 14:46:20.144672 | orchestrator | 2025-06-11 14:46:20.144682 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-11 14:46:20.144691 | orchestrator | 2025-06-11 14:46:20.144700 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-11 14:46:20.144710 | orchestrator | Wednesday 11 June 2025 14:45:21 +0000 (0:00:01.750) 0:00:02.339 ******** 2025-06-11 14:46:20.144732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:46:20.144772 | orchestrator | 2025-06-11 14:46:20.144782 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-11 14:46:20.144791 | orchestrator | Wednesday 11 June 2025 14:45:24 +0000 (0:00:02.851) 0:00:05.190 ******** 2025-06-11 
14:46:20.144807 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.144817 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:46:20.144826 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:46:20.144836 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:46:20.144845 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:46:20.144861 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:46:20.144871 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:46:20.144880 | orchestrator | 2025-06-11 14:46:20.144890 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-11 14:46:20.144904 | orchestrator | Wednesday 11 June 2025 14:45:26 +0000 (0:00:02.227) 0:00:07.417 ******** 2025-06-11 14:46:20.144913 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.144923 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:46:20.144932 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:46:20.144942 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:46:20.144956 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:46:20.144972 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:46:20.144988 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:46:20.145004 | orchestrator | 2025-06-11 14:46:20.145020 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-11 14:46:20.145037 | orchestrator | Wednesday 11 June 2025 14:45:29 +0000 (0:00:03.218) 0:00:10.636 ******** 2025-06-11 14:46:20.145052 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.145068 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:46:20.145084 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:46:20.145101 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:46:20.145117 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:46:20.145133 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:46:20.145150 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:46:20.145165 | orchestrator | 2025-06-11 14:46:20.145182 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-11 14:46:20.145199 | orchestrator | Wednesday 11 June 2025 14:45:32 +0000 (0:00:02.624) 0:00:13.260 ******** 2025-06-11 14:46:20.145216 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.145231 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:46:20.145246 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:46:20.145255 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:46:20.145265 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:46:20.145274 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:46:20.145283 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:46:20.145293 | orchestrator | 2025-06-11 14:46:20.145302 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-11 14:46:20.145312 | orchestrator | Wednesday 11 June 2025 14:45:42 +0000 (0:00:10.222) 0:00:23.483 ******** 2025-06-11 14:46:20.145321 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.145331 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:46:20.145340 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:46:20.145349 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:46:20.145359 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:46:20.145368 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:46:20.145378 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:46:20.145387 | 
orchestrator | 2025-06-11 14:46:20.145396 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-11 14:46:20.145406 | orchestrator | Wednesday 11 June 2025 14:45:58 +0000 (0:00:15.627) 0:00:39.110 ******** 2025-06-11 14:46:20.145416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:46:20.145428 | orchestrator | 2025-06-11 14:46:20.145437 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-11 14:46:20.145447 | orchestrator | Wednesday 11 June 2025 14:45:59 +0000 (0:00:01.721) 0:00:40.832 ******** 2025-06-11 14:46:20.145456 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-11 14:46:20.145474 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-11 14:46:20.145484 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-11 14:46:20.145493 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-11 14:46:20.145503 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-11 14:46:20.145512 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-11 14:46:20.145521 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-11 14:46:20.145531 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-11 14:46:20.145540 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-11 14:46:20.145549 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-11 14:46:20.145559 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-11 14:46:20.145568 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-11 14:46:20.145578 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-11 14:46:20.145587 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-11 14:46:20.145596 | orchestrator | 2025-06-11 14:46:20.145606 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-11 14:46:20.145616 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:04.566) 0:00:45.399 ******** 2025-06-11 14:46:20.145625 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.145635 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:46:20.145644 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:46:20.145654 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:46:20.145663 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:46:20.145673 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:46:20.145682 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:46:20.145691 | orchestrator | 2025-06-11 14:46:20.145701 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-11 14:46:20.145711 | orchestrator | Wednesday 11 June 2025 14:46:05 +0000 (0:00:01.496) 0:00:46.895 ******** 2025-06-11 14:46:20.145720 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.145730 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:46:20.145765 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:46:20.145776 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:46:20.145785 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:46:20.145795 | orchestrator | 
changed: [testbed-node-4] 2025-06-11 14:46:20.145804 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:46:20.145813 | orchestrator | 2025-06-11 14:46:20.145823 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-11 14:46:20.145841 | orchestrator | Wednesday 11 June 2025 14:46:07 +0000 (0:00:01.977) 0:00:48.873 ******** 2025-06-11 14:46:20.145851 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.145860 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:46:20.145870 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:46:20.145879 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:46:20.145888 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:46:20.145898 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:46:20.145912 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:46:20.145924 | orchestrator | 2025-06-11 14:46:20.145944 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-11 14:46:20.145968 | orchestrator | Wednesday 11 June 2025 14:46:09 +0000 (0:00:01.627) 0:00:50.500 ******** 2025-06-11 14:46:20.145984 | orchestrator | ok: [testbed-manager] 2025-06-11 14:46:20.145999 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:46:20.146111 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:46:20.146124 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:46:20.146134 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:46:20.146143 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:46:20.146153 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:46:20.146334 | orchestrator | 2025-06-11 14:46:20.146353 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-11 14:46:20.146375 | orchestrator | Wednesday 11 June 2025 14:46:11 +0000 (0:00:02.044) 0:00:52.545 ******** 2025-06-11 14:46:20.146385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-11 14:46:20.146397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:46:20.146408 | orchestrator | 2025-06-11 14:46:20.146417 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-11 14:46:20.146426 | orchestrator | Wednesday 11 June 2025 14:46:13 +0000 (0:00:01.497) 0:00:54.042 ******** 2025-06-11 14:46:20.146436 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.146445 | orchestrator | 2025-06-11 14:46:20.146454 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-11 14:46:20.146464 | orchestrator | Wednesday 11 June 2025 14:46:15 +0000 (0:00:02.060) 0:00:56.103 ******** 2025-06-11 14:46:20.146473 | orchestrator | changed: [testbed-manager] 2025-06-11 14:46:20.146482 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:46:20.146492 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:46:20.146501 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:46:20.146510 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:46:20.146519 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:46:20.146529 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:46:20.146538 | orchestrator | 2025-06-11 14:46:20.146548 | orchestrator | PLAY RECAP 
*********************************************************************
2025-06-11 14:46:20.146557 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146568 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146578 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146588 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146597 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146607 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146616 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:46:20.146644 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:46:20.146654 | orchestrator | Wednesday 11 June 2025 14:46:18 +0000 (0:00:03.240) 0:00:59.343 ********
2025-06-11 14:46:20.146663 | orchestrator | ===============================================================================
2025-06-11 14:46:20.146672 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.63s
2025-06-11 14:46:20.146682 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.22s
2025-06-11 14:46:20.146691 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.57s
2025-06-11 14:46:20.146700 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.24s
2025-06-11 14:46:20.146710 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.22s
2025-06-11 14:46:20.146719 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.85s
2025-06-11 14:46:20.146728 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.62s
2025-06-11 14:46:20.146770 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.23s
2025-06-11 14:46:20.146781 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.06s
2025-06-11 14:46:20.146790 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.04s
2025-06-11 14:46:20.146800 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.98s
2025-06-11 14:46:20.146819 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.75s
2025-06-11 14:46:20.146829 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s
2025-06-11 14:46:20.146838 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.63s
2025-06-11 14:46:20.146854 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.50s
2025-06-11 14:46:20.146864 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.50s
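[Aside, an editor's sketch rather than part of the captured output: unlike homer and openstackclient, the netdata play installs a native package from the vendor apt repository and configures it directly, copying netdata.conf and stream.conf to every host, then setting up testbed-manager as the streaming server (server.yml) and the nodes as clients (client.yml). Two of its host-level tasks can be sketched with stock Ansible modules; the module names are standard, the group and value details are assumptions:

- name: Add netdata user to docker group        # lets netdata read container metrics
  ansible.builtin.user:
    name: netdata
    groups: docker
    append: true

- name: Set sysctl vm.max_map_count parameter   # server-side tuning, as on testbed-manager above
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"                             # assumed value; the log does not show the number
    state: present

In the run above the group membership was already in place (ok on all hosts) while the sysctl task reported changed on testbed-manager only.]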
2025-06-11 14:46:20.146874 | orchestrator | 2025-06-11 14:46:20 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED
[2025-06-11 14:46:20 to 14:47:33 | orchestrator | polling condensed: tasks f4607c0f-5088-4de0-b391-9eade196aa4a, e63c78bc-2dc7-4f62-ae6b-ef4134a4c08d, d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 and 9e9d757a-c49d-4061-9c82-b3f471ed66eb were checked every ~3 seconds; e63c78bc-2dc7-4f62-ae6b-ef4134a4c08d reached state SUCCESS at 14:46:56, and the remaining three were still in state STARTED at the last captured check at 14:47:33]
| 2025-06-11 14:47:33 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:33.346392 | orchestrator | 2025-06-11 14:47:33 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:33.351334 | orchestrator | 2025-06-11 14:47:33 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:33.351368 | orchestrator | 2025-06-11 14:47:33 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:36.403678 | orchestrator | 2025-06-11 14:47:36 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:36.405050 | orchestrator | 2025-06-11 14:47:36 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:36.406631 | orchestrator | 2025-06-11 14:47:36 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:36.406769 | orchestrator | 2025-06-11 14:47:36 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:39.449747 | orchestrator | 2025-06-11 14:47:39 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:39.451135 | orchestrator | 2025-06-11 14:47:39 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:39.452441 | orchestrator | 2025-06-11 14:47:39 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:39.453064 | orchestrator | 2025-06-11 14:47:39 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:42.513210 | orchestrator | 2025-06-11 14:47:42 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:42.515089 | orchestrator | 2025-06-11 14:47:42 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:42.520989 | orchestrator | 2025-06-11 14:47:42 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:42.521033 | orchestrator | 2025-06-11 14:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:45.567638 | orchestrator | 2025-06-11 14:47:45 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:45.568996 | orchestrator | 2025-06-11 14:47:45 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:45.572098 | orchestrator | 2025-06-11 14:47:45 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:45.573044 | orchestrator | 2025-06-11 14:47:45 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:48.624233 | orchestrator | 2025-06-11 14:47:48 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:48.626203 | orchestrator | 2025-06-11 14:47:48 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state STARTED 2025-06-11 14:47:48.628100 | orchestrator | 2025-06-11 14:47:48 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:48.628129 | orchestrator | 2025-06-11 14:47:48 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:51.671460 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED 2025-06-11 14:47:51.673872 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:51.682000 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task d5c0809a-32c3-4ccb-acc0-77e67a78a3a8 is in state SUCCESS 2025-06-11 14:47:51.682154 | orchestrator | 2025-06-11 
2025-06-11 14:47:51.682185 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-11 14:47:51.682196 | orchestrator |
2025-06-11 14:47:51.682207 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-11 14:47:51.682218 | orchestrator | Wednesday 11 June 2025 14:45:39 +0000 (0:00:00.205) 0:00:00.205 ********
2025-06-11 14:47:51.682229 | orchestrator | ok: [testbed-manager]
2025-06-11 14:47:51.682241 | orchestrator |
2025-06-11 14:47:51.682252 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-11 14:47:51.682263 | orchestrator | Wednesday 11 June 2025 14:45:40 +0000 (0:00:00.713) 0:00:00.919 ********
2025-06-11 14:47:51.682274 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-11 14:47:51.682285 | orchestrator |
2025-06-11 14:47:51.682296 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-11 14:47:51.682307 | orchestrator | Wednesday 11 June 2025 14:45:40 +0000 (0:00:00.622) 0:00:01.542 ********
2025-06-11 14:47:51.682317 | orchestrator | changed: [testbed-manager]
2025-06-11 14:47:51.682328 | orchestrator |
2025-06-11 14:47:51.682339 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-11 14:47:51.682349 | orchestrator | Wednesday 11 June 2025 14:45:42 +0000 (0:00:01.419) 0:00:02.961 ********
2025-06-11 14:47:51.682360 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-11 14:47:51.682374 | orchestrator | ok: [testbed-manager]
2025-06-11 14:47:51.682391 | orchestrator |
2025-06-11 14:47:51.682407 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-11 14:47:51.682445 | orchestrator | Wednesday 11 June 2025 14:46:46 +0000 (0:01:04.247) 0:01:07.209 ********
2025-06-11 14:47:51.682465 | orchestrator | changed: [testbed-manager]
2025-06-11 14:47:51.682483 | orchestrator |
2025-06-11 14:47:51.682502 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:47:51.682521 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:47:51.682541 | orchestrator |
2025-06-11 14:47:51.682646 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:47:51.682670 | orchestrator | Wednesday 11 June 2025 14:46:54 +0000 (0:00:08.311) 0:01:15.521 ********
2025-06-11 14:47:51.682692 | orchestrator | ===============================================================================
2025-06-11 14:47:51.682846 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 64.25s
2025-06-11 14:47:51.682869 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.31s
2025-06-11 14:47:51.682889 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.42s
2025-06-11 14:47:51.682908 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.71s
2025-06-11 14:47:51.682928 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s
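The docker-compose.yml that the role copies to /opt/phpmyadmin is not captured in the log. A minimal sketch of what a phpMyAdmin service attached to the external traefik network could look like; the image tag, PMA_HOST value, and router rule below are assumptions for illustration, not values taken from the osism.services.phpmyadmin role:

    # hypothetical /opt/phpmyadmin/docker-compose.yml (sketch, not the role's file)
    services:
      phpmyadmin:
        image: phpmyadmin:latest          # assumed image/tag
        restart: unless-stopped
        environment:
          PMA_HOST: testbed-manager       # assumed database endpoint
        networks:
          - traefik
        labels:
          traefik.enable: "true"          # assumed traefik wiring
          traefik.http.routers.phpmyadmin.rule: "Host(`phpmyadmin.testbed.example`)"
    networks:
      traefik:
        external: true                    # matches the "Create traefik external network" task above

Under these assumptions, the "Restart phpmyadmin service" handler plausibly amounts to a `docker compose restart` against this file.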
2025-06-11 14:47:51.684837 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-11 14:47:51.684850 | orchestrator |
2025-06-11 14:47:51.684861 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-11 14:47:51.684872 | orchestrator | Wednesday 11 June 2025 14:45:13 +0000 (0:00:00.290) 0:00:00.290 ********
2025-06-11 14:47:51.684883 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:47:51.684895 | orchestrator |
2025-06-11 14:47:51.684906 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-11 14:47:51.685005 | orchestrator | Wednesday 11 June 2025 14:45:14 +0000 (0:00:01.288) 0:00:01.579 ********
2025-06-11 14:47:51.685098 | orchestrator | changed: all seven hosts (testbed-manager, testbed-node-0 .. testbed-node-5), one item per service: (item=[{'service_name': 'cron'}, 'cron']), (item=[{'service_name': 'fluentd'}, 'fluentd']), (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) (21 near-identical "changed" lines condensed)
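The [{'service_name': <name>}, <name>] item shape suggests the task loops over (service, directory) pairs. A minimal sketch of that loop; the node_config_directory path variable and the mode are assumptions following the kolla-ansible convention, not values read from the role:

    # sketch of "Ensuring config directories exist" (variable names assumed)
    - name: Ensuring config directories exist
      become: true
      ansible.builtin.file:
        path: "{{ node_config_directory }}/{{ item.1 }}"  # e.g. /etc/kolla/cron
        state: directory
        mode: "0770"
      loop:
        - [{ service_name: "cron" }, "cron"]
        - [{ service_name: "fluentd" }, "fluentd"]
        - [{ service_name: "kolla-toolbox" }, "kolla-toolbox"]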
2025-06-11 14:47:51.685467 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-11 14:47:51.685479 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:04.048) 0:00:05.627 ********
2025-06-11 14:47:51.685499 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:47:51.685513 | orchestrator |
2025-06-11 14:47:51.685531 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-11 14:47:51.685549 | orchestrator | Wednesday 11 June 2025 14:45:19 +0000 (0:00:01.320) 0:00:06.948 ********
2025-06-11 14:47:51.685628 | orchestrator | changed: all seven hosts, one item per service; every item carries the same service definition on every host, so the three definitions are shown once:
2025-06-11 14:47:51.685628 | orchestrator |   fluentd: {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}
2025-06-11 14:47:51.685647 | orchestrator |   kolla-toolbox: {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}
2025-06-11 14:47:51.685734 | orchestrator |   cron: {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}
2025-06-11 14:47:51.686206 | orchestrator |
2025-06-11 14:47:51.686227 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-11 14:47:51.686248 | orchestrator | Wednesday 11 June 2025 14:45:24 +0000 (0:00:04.701) 0:00:11.650 ********
2025-06-11 14:47:51.686267 | orchestrator | skipping: all seven hosts, all three items (same service definitions as above; 21 per-item and 7 per-host "skipping" lines condensed)
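The blanket skipping is consistent with backend TLS being disabled in this testbed: the CA-certificate copy ran, while both backend TLS tasks were guarded out. A sketch of that guard; kolla_enable_tls_backend and kolla_tls_backend_cert follow upstream kolla-ansible naming, but the exact task layout here is an assumption:

    # sketch of the guarded "Copying over backend internal TLS certificate" task
    - name: "common | Copying over backend internal TLS certificate"
      become: true
      ansible.builtin.copy:
        src: "{{ kolla_tls_backend_cert }}"
        dest: "{{ node_config_directory }}/{{ item.key }}/common-cert.pem"
        mode: "0600"
      when:
        - kolla_enable_tls_backend | bool   # false in this run, hence "skipping"
        - item.value.enabled | bool
      with_dict: "{{ common_services }}"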
2025-06-11 14:47:51.686662 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-11 14:47:51.686673 | orchestrator | Wednesday 11 June 2025 14:45:25 +0000 (0:00:01.332) 0:00:12.983 ********
2025-06-11 14:47:51.686684 | orchestrator | skipping: all seven hosts, all three items (same service definitions as above)
2025-06-11 14:47:51.687184 | orchestrator |
2025-06-11 14:47:51.687194 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-11 14:47:51.687204 | orchestrator | Wednesday 11 June 2025 14:45:28 +0000 (0:00:02.810) 0:00:15.793 ********
2025-06-11 14:47:51.687213 | orchestrator | skipping: testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:47:51.687285 | orchestrator |
2025-06-11 14:47:51.687295 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-11 14:47:51.687304 | orchestrator | Wednesday 11 June 2025 14:45:29 +0000 (0:00:00.873) 0:00:16.667 ********
2025-06-11 14:47:51.687314 | orchestrator | skipping: testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:47:51.687380 | orchestrator |
2025-06-11 14:47:51.687389 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-11 14:47:51.687399 | orchestrator | Wednesday 11 June 2025 14:45:30 +0000 (0:00:01.263) 0:00:17.931 ********
2025-06-11 14:47:51.687409 | orchestrator | changed: fluentd item on all seven hosts; kolla-toolbox item on testbed-manager and testbed-node-0 .. testbed-node-3; cron item on testbed-manager (same service definitions as above; the captured log ends mid-item here)
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.687762 | orchestrator |
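
The four "Find custom fluentd ... config files" tasks that follow look for operator-supplied fluentd snippets in the configuration repository. The [WARNING] lines they emit only mean that no overlay directory exists under /opt/configuration/environments/kolla/files/overlays/fluentd/ in this testbed configuration; the tasks still end in "ok" and the warnings are harmless. To extend the generated fluentd.conf, an operator would drop a snippet into one of those directories. A minimal sketch, assuming a hypothetical file name and a forward target that are not part of this deployment:

    # /opt/configuration/environments/kolla/files/overlays/fluentd/output/99-custom.conf (hypothetical)
    # Additionally forward all log records to an external aggregator.
    <match **>
      @type forward
      <server>
        host log-aggregator.example.com   # assumed endpoint
        port 24224
      </server>
    </match>
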
2025-06-11 14:47:51.687772 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-11 14:47:51.687782 | orchestrator | Wednesday 11 June 2025 14:45:35 +0000 (0:00:05.177) 0:00:23.108 ******** 2025-06-11 14:47:51.687791 | orchestrator | [WARNING]: Skipped 2025-06-11 14:47:51.687802 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-11 14:47:51.687812 | orchestrator | to this access issue: 2025-06-11 14:47:51.687821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-11 14:47:51.687831 | orchestrator | directory 2025-06-11 14:47:51.687840 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 14:47:51.687850 | orchestrator | 2025-06-11 14:47:51.687859 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-11 14:47:51.687869 | orchestrator | Wednesday 11 June 2025 14:45:37 +0000 (0:00:01.795) 0:00:24.904 ******** 2025-06-11 14:47:51.687884 | orchestrator | [WARNING]: Skipped 2025-06-11 14:47:51.687894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-11 14:47:51.687909 | orchestrator | to this access issue: 2025-06-11 14:47:51.687919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-11 14:47:51.687929 | orchestrator | directory 2025-06-11 14:47:51.687938 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 14:47:51.687948 | orchestrator | 2025-06-11 14:47:51.687957 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-11 14:47:51.687967 | orchestrator | Wednesday 11 June 2025 14:45:38 +0000 (0:00:01.024) 0:00:25.928 ******** 2025-06-11 14:47:51.687976 | orchestrator | [WARNING]: Skipped 2025-06-11 14:47:51.687986 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-11 14:47:51.687995 | orchestrator | to this access issue: 2025-06-11 14:47:51.688005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-11 14:47:51.688014 | orchestrator | directory 2025-06-11 14:47:51.688023 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 14:47:51.688033 | orchestrator | 2025-06-11 14:47:51.688042 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-11 14:47:51.688052 | orchestrator | Wednesday 11 June 2025 14:45:39 +0000 (0:00:00.749) 0:00:26.678 ******** 2025-06-11 14:47:51.688061 | orchestrator | [WARNING]: Skipped 2025-06-11 14:47:51.688071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-11 14:47:51.688080 | orchestrator | to this access issue: 2025-06-11 14:47:51.688089 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-11 14:47:51.688099 | orchestrator | directory 2025-06-11 14:47:51.688108 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 14:47:51.688118 | orchestrator | 2025-06-11 14:47:51.688127 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-11 14:47:51.688136 | orchestrator | Wednesday 11 June 2025 14:45:40 +0000 (0:00:00.778) 0:00:27.456 ******** 2025-06-11 14:47:51.688146 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.688155 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.688164 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.688175 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.688191 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.688208 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.688225 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.688242 | orchestrator | 2025-06-11 14:47:51.688254 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-11 14:47:51.688264 | orchestrator | Wednesday 11 June 2025 14:45:44 +0000 (0:00:03.987) 0:00:31.443 ******** 2025-06-11 14:47:51.688272 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688296 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688304 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688320 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-11 14:47:51.688327 | orchestrator | 2025-06-11 14:47:51.688335 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-11 14:47:51.688343 | orchestrator | Wednesday 11 June 2025 14:45:46 +0000 (0:00:02.616) 0:00:34.060 ******** 2025-06-11 14:47:51.688356 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.688368 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.688376 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.688383 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.688391 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.688399 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.688406 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.688414 | orchestrator | 2025-06-11 14:47:51.688422 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-11 14:47:51.688429 | orchestrator | Wednesday 11 June 2025 14:45:49 +0000 (0:00:02.848) 0:00:36.908 ******** 2025-06-11 14:47:51.688438 |
orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688476 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688551 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688580 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688588 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688609 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:47:51.688629 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688638 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688646 | orchestrator | 2025-06-11 14:47:51.688654 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-11 14:47:51.688666 | orchestrator | Wednesday 11 June 2025 14:45:52 +0000 (0:00:02.848) 0:00:39.757 ******** 2025-06-11 14:47:51.688674 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688682 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688711 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688719 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688727 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688734 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-11 14:47:51.688742 | orchestrator | 2025-06-11 14:47:51.688750 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-11 14:47:51.688758 | orchestrator | Wednesday 11 June 2025 14:45:54 +0000 (0:00:02.089) 0:00:41.847 ******** 2025-06-11 14:47:51.688766 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688785 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688793 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688800 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688808 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688815 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-11 14:47:51.688823 | orchestrator | 2025-06-11 14:47:51.688831 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-11 14:47:51.688838 | orchestrator | Wednesday 11 June 2025 14:45:57 +0000 (0:00:02.662) 0:00:44.509 ******** 2025-06-11 14:47:51.688847 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688893 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688934 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-11 14:47:51.688988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.688996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689048 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:47:51.689064 | orchestrator | 2025-06-11 14:47:51.689072 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-11 14:47:51.689080 | orchestrator | Wednesday 11 June 2025 14:46:00 +0000 (0:00:03.656) 0:00:48.165 ******** 2025-06-11 14:47:51.689088 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.689096 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.689103 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.689111 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.689119 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.689127 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.689134 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.689142 | orchestrator | 2025-06-11 14:47:51.689149 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-11 14:47:51.689157 | orchestrator | Wednesday 11 June 2025 14:46:02 +0000 (0:00:01.698) 0:00:49.864 ******** 2025-06-11 14:47:51.689165 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.689173 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.689180 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.689192 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.689213 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.689222 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.689229 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.689237 | orchestrator | 2025-06-11 14:47:51.689245 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689253 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.159) 0:00:51.050 ******** 2025-06-11 14:47:51.689260 | orchestrator | 2025-06-11 14:47:51.689268 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689276 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.047) 0:00:51.209 ******** 2025-06-11 14:47:51.689283 | orchestrator | 2025-06-11 14:47:51.689291 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689299 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.047) 0:00:51.257 ******** 2025-06-11 14:47:51.689306 | orchestrator | 2025-06-11 14:47:51.689314 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689322 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:00.067) 0:00:51.325 ********
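
The common-role tasks above notify restart handlers whenever they change a config file or a container definition ("Check common containers" reported changed on every host), and the string of "Flush handlers" meta tasks here is where kolla-ansible forces those handlers to run before the role finishes. Conceptually, a handler such as "Restart fluentd container" recreates the container so it starts against the freshly copied configuration. A rough Python sketch of that idea using the Docker SDK; this is an illustration, not kolla-ansible's actual kolla_container module:

    import docker

    def recreate_container(name: str, image: str, volumes: list[str]) -> None:
        """Remove a stale container and start a fresh one (illustrative only)."""
        client = docker.from_env()
        try:
            old = client.containers.get(name)
            old.stop()
            old.remove()
        except docker.errors.NotFound:
            pass  # first deployment: nothing to remove yet
        client.containers.run(
            image,
            name=name,
            volumes=volumes,
            detach=True,
            restart_policy={"Name": "unless-stopped"},
        )

    # e.g. recreate_container("fluentd", "registry.osism.tech/kolla/fluentd:2024.2",
    #                         ["kolla_logs:/var/log/kolla/"])
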
2025-06-11 14:47:51.689330 | orchestrator | 2025-06-11 14:47:51.689337 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689350 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:00.057) 0:00:51.382 ******** 2025-06-11 14:47:51.689358 | orchestrator | 2025-06-11 14:47:51.689366 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689373 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:00.059) 0:00:51.442 ******** 2025-06-11 14:47:51.689381 | orchestrator | 2025-06-11 14:47:51.689389 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-11 14:47:51.689396 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:00.055) 0:00:51.497 ******** 2025-06-11 14:47:51.689404 | orchestrator | 2025-06-11 14:47:51.689412 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-11 14:47:51.689424 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:00.079) 0:00:51.577 ******** 2025-06-11 14:47:51.689432 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.689440 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.689448 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.689455 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.689463 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.689471 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.689479 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.689486 | orchestrator | 2025-06-11 14:47:51.689494 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-11 14:47:51.689502 | orchestrator | Wednesday 11 June 2025 14:46:47 +0000 (0:00:42.841) 0:01:34.419 ******** 2025-06-11 14:47:51.689510 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.689518 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.689525 | orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.689533 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.689540 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.689548 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.689556 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.689563 | orchestrator | 2025-06-11 14:47:51.689571 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-11 14:47:51.689579 | orchestrator | Wednesday 11 June 2025 14:47:37 +0000 (0:00:50.115) 0:02:24.534 ******** 2025-06-11 14:47:51.689587 | orchestrator | ok: [testbed-manager] 2025-06-11 14:47:51.689594 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:47:51.689602 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:47:51.689610 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:47:51.689618 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:47:51.689625 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:47:51.689633 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:47:51.689641 | orchestrator | 2025-06-11 14:47:51.689648 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-11 14:47:51.689656 | orchestrator | Wednesday 11 June 2025 14:47:39 +0000 (0:00:02.251) 0:02:26.785 ******** 2025-06-11 14:47:51.689664 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:47:51.689672 |
orchestrator | changed: [testbed-manager] 2025-06-11 14:47:51.689679 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:47:51.689687 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:47:51.689711 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:47:51.689719 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:47:51.689727 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:47:51.689735 | orchestrator | 2025-06-11 14:47:51.689743 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:47:51.689751 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689759 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689767 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689780 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689788 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689796 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689808 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-11 14:47:51.689816 | orchestrator | 2025-06-11 14:47:51.689824 | orchestrator |
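
The "Copying over config.json files for services" step summarized below (5.18s) is what feeds the kolla containers: each service gets a config.json under /etc/kolla/<service>/ on the host, that directory is bind-mounted read-only to /var/lib/kolla/config_files/ inside the container (visible in the volume lists above), and because the containers run with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, the kolla entrypoint copies the listed files into place on every container start. A minimal sketch of the shape of such a file for fluentd; the command, paths, owner and permissions here are illustrative, not taken from this build:

    {
      "command": "fluentd -c /etc/fluentd/fluentd.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/fluentd.conf",
          "dest": "/etc/fluentd/fluentd.conf",
          "owner": "fluentd",
          "perm": "0600"
        }
      ]
    }
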
2025-06-11 14:47:51.689832 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:47:51.689839 | orchestrator | Wednesday 11 June 2025 14:47:48 +0000 (0:00:09.038) 0:02:35.823 ******** 2025-06-11 14:47:51.689847 | orchestrator | =============================================================================== 2025-06-11 14:47:51.689855 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 50.12s 2025-06-11 14:47:51.689863 | orchestrator | common : Restart fluentd container ------------------------------------- 42.84s 2025-06-11 14:47:51.689870 | orchestrator | common : Restart cron container ----------------------------------------- 9.04s 2025-06-11 14:47:51.689878 | orchestrator | common : Copying over config.json files for services -------------------- 5.18s 2025-06-11 14:47:51.689886 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.70s 2025-06-11 14:47:51.689893 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.05s 2025-06-11 14:47:51.689901 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.99s 2025-06-11 14:47:51.689909 | orchestrator | common : Check common containers ---------------------------------------- 3.66s 2025-06-11 14:47:51.689916 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.85s 2025-06-11 14:47:51.689924 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.85s 2025-06-11 14:47:51.689932 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.81s 2025-06-11 14:47:51.689940 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.66s 2025-06-11 14:47:51.689947 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.62s 2025-06-11 14:47:51.689955 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.25s 2025-06-11 14:47:51.689967 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.09s 2025-06-11 14:47:51.689976 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.80s 2025-06-11 14:47:51.689983 | orchestrator | common : Creating log volume -------------------------------------------- 1.70s 2025-06-11 14:47:51.689991 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.33s 2025-06-11 14:47:51.689999 | orchestrator | common : include_tasks -------------------------------------------------- 1.32s 2025-06-11 14:47:51.690006 | orchestrator | common : include_tasks -------------------------------------------------- 1.29s 2025-06-11 14:47:51.690051 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:47:51.690176 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:51.690188 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:47:51.691335 | orchestrator | 2025-06-11 14:47:51 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:47:51.691518 | orchestrator | 2025-06-11 14:47:51 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:54.719909 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED 2025-06-11 14:47:54.720585 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:54.721155 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:47:54.722275 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:54.722815 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:47:54.723271 | orchestrator | 2025-06-11 14:47:54 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:47:54.723359 | orchestrator | 2025-06-11 14:47:54 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:47:57.765578 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED 2025-06-11 14:47:57.765673 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:47:57.765688 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:47:57.767125 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:47:57.775976 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:47:57.776002 | orchestrator | 2025-06-11 14:47:57 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:47:57.776029 | orchestrator | 2025-06-11 14:47:57 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:00.809525 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED
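
From this point the job mostly waits on background tasks: each iteration prints every outstanding task UUID with its state until the task flips to SUCCESS and drops out of the set (the checks actually land about three seconds apart even though the message says "Wait 1 second(s)"). A compact Python sketch of such a loop, assuming a Celery-style AsyncResult API; this mirrors the log messages, not the actual osism implementation:

    import time
    from celery.result import AsyncResult

    def wait_for_tasks(app, task_ids, interval=1):
        """Poll task states until every task has finished (illustrative)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id, app=app).state
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
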
2025-06-11 14:48:00.809610 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:00.809626 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:00.809638 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:00.809920 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:00.810485 | orchestrator | 2025-06-11 14:48:00 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:00.810512 | orchestrator | 2025-06-11 14:48:00 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:03.846191 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED 2025-06-11 14:48:03.846292 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:03.849498 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:03.851233 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:03.851260 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:03.851271 | orchestrator | 2025-06-11 14:48:03 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:03.851282 | orchestrator | 2025-06-11 14:48:03 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:06.880448 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state STARTED 2025-06-11 14:48:06.886904 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:06.887222 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:06.887717 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:06.888271 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:06.888844 | orchestrator | 2025-06-11 14:48:06 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:06.888867 | orchestrator | 2025-06-11 14:48:06 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:09.924096 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task f93f941d-4c8b-494b-abfe-f3c3d434a203 is in state SUCCESS 2025-06-11 14:48:09.925027 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:09.925730 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:09.926368 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:09.927378 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:09.928336 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:09.929073 | orchestrator | 2025-06-11 14:48:09 | INFO  | Task 
4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:48:09.929174 | orchestrator | 2025-06-11 14:48:09 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:12.978924 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:12.979294 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:12.979858 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:12.980495 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:12.981578 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:12.983776 | orchestrator | 2025-06-11 14:48:12 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:48:12.985279 | orchestrator | 2025-06-11 14:48:12 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:16.013821 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:16.014337 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:16.014753 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:16.016873 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:16.017311 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:16.022603 | orchestrator | 2025-06-11 14:48:16 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:48:16.023074 | orchestrator | 2025-06-11 14:48:16 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:19.070554 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:19.071574 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:19.072720 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:19.073649 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:19.074415 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state STARTED 2025-06-11 14:48:19.075356 | orchestrator | 2025-06-11 14:48:19 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:48:19.075398 | orchestrator | 2025-06-11 14:48:19 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:48:22.124852 | orchestrator | 2025-06-11 14:48:22 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED 2025-06-11 14:48:22.127309 | orchestrator | 2025-06-11 14:48:22 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:48:22.129611 | orchestrator | 2025-06-11 14:48:22 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:48:22.131557 | orchestrator | 2025-06-11 14:48:22 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED 2025-06-11 14:48:22.133704 | orchestrator | 2025-06-11 
2025-06-11 14:48:25.201809 | orchestrator | 2025-06-11 14:48:25 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED
2025-06-11 14:48:25.201929 | orchestrator | 2025-06-11 14:48:25 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:48:25.201946 | orchestrator | 2025-06-11 14:48:25 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:48:25.204806 | orchestrator | 2025-06-11 14:48:25 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state STARTED
2025-06-11 14:48:25.205660 | orchestrator | 2025-06-11 14:48:25 | INFO  | Task 56395375-1b5d-4c17-a361-908bea58c0ad is in state SUCCESS
2025-06-11 14:48:25.207901 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:48:25.207934 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:48:25.207950 | orchestrator | Wednesday 11 June 2025 14:47:55 +0000 (0:00:00.487) 0:00:00.487 ********
2025-06-11 14:48:25.207967 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:48:25.207986 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:48:25.208003 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:48:25.208042 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:48:25.208061 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:00.610) 0:00:01.097 ********
2025-06-11 14:48:25.208081 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-11 14:48:25.208101 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-11 14:48:25.208120 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-11 14:48:25.208188 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-11 14:48:25.208225 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-11 14:48:25.208255 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:00.667) 0:00:01.765 ********
2025-06-11 14:48:25.208274 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:48:25.208312 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-11 14:48:25.208329 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:01.040) 0:00:02.805 ********
2025-06-11 14:48:25.208348 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-11 14:48:25.208367 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-11 14:48:25.208387 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-11 14:48:25.208417 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-11 14:48:25.208430 | orchestrator | Wednesday 11 June 2025 14:47:58 +0000 (0:00:00.937) 0:00:03.742 ********
2025-06-11 14:48:25.208449 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-11 14:48:25.208469 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-11 14:48:25.208481 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-11 14:48:25.208505 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-11 14:48:25.208517 | orchestrator | Wednesday 11 June 2025 14:48:01 +0000 (0:00:02.429) 0:00:06.172 ********
2025-06-11 14:48:25.208529 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:48:25.208540 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:48:25.208552 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:48:25.208576 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-11 14:48:25.208588 | orchestrator | Wednesday 11 June 2025 14:48:03 +0000 (0:00:02.463) 0:00:08.635 ********
2025-06-11 14:48:25.208599 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:48:25.208610 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:48:25.208622 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:48:25.208645 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:48:25.208657 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:48:25.208671 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:48:25.208721 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:48:25.208776 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:48:25.208794 | orchestrator | Wednesday 11 June 2025 14:48:06 +0000 (0:00:02.764) 0:00:11.400 ********
2025-06-11 14:48:25.208812 | orchestrator | ===============================================================================
2025-06-11 14:48:25.208830 | orchestrator | memcached : Restart memcached container --------------------------------- 2.76s
2025-06-11 14:48:25.208844 | orchestrator | memcached : Check memcached container ----------------------------------- 2.46s
2025-06-11 14:48:25.208863 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.43s
2025-06-11 14:48:25.208879 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.04s
2025-06-11 14:48:25.208896 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.94s
2025-06-11 14:48:25.208915 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-06-11 14:48:25.208935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s
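The "Copying over config.json files for services" step above is the usual kolla-ansible pattern: each container gets a config.json under /etc/kolla/<service>/, bind-mounted into the container at /var/lib/kolla/config_files/, which the image's startup logic reads to install configuration files and pick the command to run. A sketch of that file's general shape; the memcached command line and permissions below are illustrative assumptions, not the testbed's actual values:

    import json

    # Illustrative kolla-style config.json for a memcached container; the
    # command and paths are assumptions for the sake of the example.
    memcached_config = {
        "command": "/usr/bin/memcached -v -l 10.0.0.10 -p 11211",
        "config_files": [],
        "permissions": [
            {"path": "/var/log/kolla/memcached", "owner": "memcached:memcached", "recurse": True},
        ],
    }

    # Written to the host, then bind-mounted read-only into the container.
    with open("/etc/kolla/memcached/config.json", "w") as handle:
        json.dump(memcached_config, handle, indent=4)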
2025-06-11 14:48:25.208967 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:48:25.208987 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:48:25.208998 | orchestrator | Wednesday 11 June 2025 14:47:55 +0000 (0:00:00.482) 0:00:00.482 ********
2025-06-11 14:48:25.209008 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:48:25.209019 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:48:25.209029 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:48:25.209051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:48:25.209078 | orchestrator | Wednesday 11 June 2025 14:47:55 +0000 (0:00:00.413) 0:00:00.896 ********
2025-06-11 14:48:25.209090 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-11 14:48:25.209100 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-11 14:48:25.209111 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-11 14:48:25.209132 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-11 14:48:25.209153 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-11 14:48:25.209163 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:00.681) 0:00:01.577 ********
2025-06-11 14:48:25.209174 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:48:25.209195 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-11 14:48:25.209205 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:00.789) 0:00:02.367 ********
2025-06-11 14:48:25.209219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209329 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-11 14:48:25.209340 | orchestrator | Wednesday 11 June 2025 14:47:59 +0000 (0:00:01.883) 0:00:04.251 ********
2025-06-11 14:48:25.209355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209447 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-11 14:48:25.209458 | orchestrator | Wednesday 11 June 2025 14:48:02 +0000 (0:00:03.514) 0:00:07.765 ********
2025-06-11 14:48:25.209469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209564 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-11 14:48:25.209576 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:03.101) 0:00:10.867 ********
2025-06-11 14:48:25.209587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-11 14:48:25.209740 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-11 14:48:25.209755 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:01.698) 0:00:12.566 ********
2025-06-11 14:48:25.209777 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-11 14:48:25.209794 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.062) 0:00:12.629 ********
2025-06-11 14:48:25.209816 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-11 14:48:25.209826 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.056) 0:00:12.686 ********
2025-06-11 14:48:25.209847 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-11 14:48:25.209858 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.110) 0:00:12.796 ********
2025-06-11 14:48:25.209868 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:48:25.209879 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:48:25.209889 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:48:25.209910 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-11 14:48:25.209921 | orchestrator | Wednesday 11 June 2025 14:48:17 +0000 (0:00:09.218) 0:00:22.015 ********
2025-06-11 14:48:25.209931 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:48:25.209942 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:48:25.209952 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:48:25.209973 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:48:25.209989 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:48:25.210001 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:48:25.210083 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
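Each service item in the redis play above carries a healthcheck block whose interval, timeout and start_period are plain seconds stored as strings, while the test is a Docker CMD-SHELL vector. The Docker Engine API expects these durations in nanoseconds, so a deployment tool has to convert them; a small stand-alone sketch of that conversion (how kolla-ansible performs it internally is not shown in this log, so treat the function as illustrative):

    NS_PER_S = 1_000_000_000

    def to_docker_healthcheck(hc):
        # Map a kolla-style healthcheck dict (seconds as strings) onto the
        # nanosecond-based fields of the Docker Engine API's HealthConfig.
        return {
            "Test": hc["test"],
            "Interval": int(hc["interval"]) * NS_PER_S,
            "Timeout": int(hc["timeout"]) * NS_PER_S,
            "StartPeriod": int(hc["start_period"]) * NS_PER_S,
            "Retries": int(hc["retries"]),
        }

    print(to_docker_healthcheck({
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    }))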
2025-06-11 14:48:25.210120 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:48:25.210131 | orchestrator | Wednesday 11 June 2025 14:48:22 +0000 (0:00:05.133) 0:00:27.149 ********
2025-06-11 14:48:25.210141 | orchestrator | ===============================================================================
2025-06-11 14:48:25.210152 | orchestrator | redis : Restart redis container ----------------------------------------- 9.22s
2025-06-11 14:48:25.210162 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.13s
2025-06-11 14:48:25.210173 | orchestrator | redis : Copying over default config.json files -------------------------- 3.51s
2025-06-11 14:48:25.210183 | orchestrator | redis : Copying over redis config files --------------------------------- 3.10s
2025-06-11 14:48:25.210194 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.88s
2025-06-11 14:48:25.210205 | orchestrator | redis : Check redis containers ------------------------------------------ 1.70s
2025-06-11 14:48:25.210215 | orchestrator | redis : include_tasks --------------------------------------------------- 0.79s
2025-06-11 14:48:25.210226 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2025-06-11 14:48:25.210236 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2025-06-11 14:48:25.210246 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s
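The TASKS RECAP blocks interleaved with the polling output are profile_tasks-style per-task timings. If you need them machine-readable (for example to spot the slowest steps across many CI runs), the "name ---- 1.23s" lines are easy to parse; a small sketch:

    import re

    RECAP_LINE = re.compile(r"^(?P<name>.+?) -{2,} (?P<seconds>\d+\.\d+)s$")

    def parse_recap(lines):
        # Extract (task name, duration in seconds) pairs from recap lines.
        results = []
        for line in lines:
            match = RECAP_LINE.match(line.strip())
            if match:
                results.append((match.group("name"), float(match.group("seconds"))))
        return results

    print(parse_recap([
        "redis : Restart redis container ----------------------------------------- 9.22s",
    ]))
    # -> [('redis : Restart redis container', 9.22)]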
2025-06-11 14:48:58.671103 | orchestrator | 2025-06-11 14:48:58 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED
2025-06-11 14:48:58.671887 | orchestrator | 2025-06-11 14:48:58 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:48:58.673018 | orchestrator | 2025-06-11 14:48:58 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:48:58.674363 | orchestrator | 2025-06-11 14:48:58 | INFO  | Task 6a5218a3-8418-454f-aa4a-1b30e43cd290 is in state SUCCESS
2025-06-11 14:48:58.676036 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:48:58.676053 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:48:58.676061 | orchestrator | Wednesday 11 June 2025 14:47:55 +0000 (0:00:00.213) 0:00:00.213 ********
2025-06-11 14:48:58.676069 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:48:58.676079 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:48:58.676087 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:48:58.676095 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:48:58.676102 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:48:58.676110 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:48:58.676126 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:48:58.676134 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:01.248) 0:00:01.461 ********
2025-06-11 14:48:58.676163 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676179 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676194 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676210 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676223 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676231 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-11 14:48:58.676247 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-11 14:48:58.676262 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-11 14:48:58.676270 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:01.014) 0:00:02.476 ********
2025-06-11 14:48:58.676278 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:48:58.676295 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-11 14:48:58.676303 | orchestrator | Wednesday 11 June 2025 14:47:59 +0000 (0:00:01.738) 0:00:04.215 ********
2025-06-11 14:48:58.676311 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-11 14:48:58.676319 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-11 14:48:58.676327 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-11 14:48:58.676335 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-11 14:48:58.676343 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-11 14:48:58.676350 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-11 14:48:58.676366 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-11 14:48:58.676374 | orchestrator | Wednesday 11 June 2025 14:48:01 +0000 (0:00:02.417) 0:00:06.632 ********
2025-06-11 14:48:58.676381 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-11 14:48:58.676389 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-11 14:48:58.676397 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-11 14:48:58.676405 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-11 14:48:58.676412 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-11 14:48:58.676420 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-11 14:48:58.676435 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-11 14:48:58.676443 | orchestrator | Wednesday 11 June 2025 14:48:03 +0000 (0:00:02.187) 0:00:08.820 ********
2025-06-11 14:48:58.676451 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-11 14:48:58.676458 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:48:58.676466 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-11 14:48:58.676475 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:48:58.676484 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-11 14:48:58.676498 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:48:58.676512 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-11 14:48:58.676523 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:48:58.676531 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-11 14:48:58.676539 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:48:58.676552 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-11 14:48:58.676560 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:48:58.676583 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-11 14:48:58.676590 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:01.739) 0:00:10.559 ********
2025-06-11 14:48:58.676598 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:48:58.676606 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:48:58.676614 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:48:58.676623 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:48:58.676632 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:48:58.676640 | orchestrator | skipping: [testbed-node-5]
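The module-load tasks above do two things per node: load the openvswitch kernel module immediately and persist it across reboots (the "Drop module persistence" task is skipped because persistence is not being removed here). The manual equivalent of those two steps looks roughly like this sketch; the paths follow systemd's modules-load.d convention, and the snippet requires root:

    import pathlib
    import subprocess

    MODULE = "openvswitch"

    # Load the kernel module right away (what "Load modules" changed above).
    subprocess.run(["modprobe", MODULE], check=True)

    # Persist it across reboots via systemd-modules-load
    # (what "Persist modules via modules-load.d" changed above).
    conf = pathlib.Path("/etc/modules-load.d") / f"{MODULE}.conf"
    conf.write_text(f"{MODULE}\n")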
2025-06-11 14:48:58.676704 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-11 14:48:58.676715 | orchestrator | Wednesday 11 June 2025 14:48:06 +0000 (0:00:00.826) 0:00:11.386 ********
2025-06-11 14:48:58.676740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676892 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-11 14:48:58.676900 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:01.477) 0:00:12.863 ********
2025-06-11 14:48:58.676908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.676981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.676989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.677002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.677015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.677029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-11 14:48:58.677046 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-11 14:48:58.677054 | orchestrator | Wednesday 11 June 2025 14:48:11 +0000 (0:00:03.877) 0:00:16.741 ********
2025-06-11 14:48:58.677062 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:48:58.677070 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:48:58.677077 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:48:58.677085 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:48:58.677093 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:48:58.677100 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:48:58.677116 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-11 14:48:58.677124 | orchestrator | Wednesday 11 June 2025 14:48:13 +0000 (0:00:01.651) 0:00:18.392 ********
2025-06-11 14:48:58.677132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.677140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.677153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.677164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-11 14:48:58.677178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/',
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-11 14:48:58.677260 | orchestrator | 2025-06-11 14:48:58.677267 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677275 | orchestrator | Wednesday 11 June 2025 14:48:15 +0000 (0:00:02.334) 0:00:20.726 ******** 2025-06-11 14:48:58.677283 | orchestrator | 2025-06-11 14:48:58.677291 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677321 | orchestrator | Wednesday 11 June 2025 14:48:15 +0000 (0:00:00.135) 0:00:20.862 ******** 2025-06-11 14:48:58.677330 | orchestrator | 2025-06-11 14:48:58.677338 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677346 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.271) 0:00:21.133 ******** 2025-06-11 14:48:58.677353 | orchestrator | 2025-06-11 14:48:58.677361 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677369 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.212) 0:00:21.345 ******** 2025-06-11 14:48:58.677376 | orchestrator | 2025-06-11 14:48:58.677384 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677392 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.315) 0:00:21.661 ******** 2025-06-11 14:48:58.677400 | orchestrator | 2025-06-11 14:48:58.677407 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-11 14:48:58.677415 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.309) 0:00:21.970 ******** 2025-06-11 14:48:58.677423 | orchestrator | 2025-06-11 14:48:58.677430 | orchestrator | 
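Each item echoed in the two loops above is one kolla-ansible service definition. Rendered as YAML, the vswitchd entry logged here corresponds roughly to the sketch below; it is reconstructed from the logged dict, and the top-level variable name (openvswitch_services) is an assumption about the role's defaults, not something shown in this log.

# Sketch reconstructed from the task output above; not the role's verbatim defaults.
openvswitch_services:
  openvswitch-vswitchd:
    container_name: openvswitch_vswitchd
    image: registry.osism.tech/kolla/openvswitch-vswitchd:2024.2
    enabled: true
    group: openvswitch
    host_in_groups: true
    privileged: true
    volumes:
      - "/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/lib/modules:/lib/modules:ro"
      - "/run/openvswitch:/run/openvswitch:shared"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "ovs-appctl version"]
      timeout: "30"

Per the same loop output, the openvswitch-db-server entry differs only in that it is not privileged, mounts an additional openvswitch_db volume at /var/lib/openvswitch/, and health-checks with "ovsdb-client list-dbs" instead of "ovs-appctl version".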
RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-11 14:48:58.677438 | orchestrator | Wednesday 11 June 2025 14:48:17 +0000 (0:00:00.662) 0:00:22.633 ******** 2025-06-11 14:48:58.677446 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:48:58.677454 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:48:58.677461 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:48:58.677469 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:48:58.677477 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:48:58.677484 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:48:58.677492 | orchestrator | 2025-06-11 14:48:58.677500 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-11 14:48:58.677507 | orchestrator | Wednesday 11 June 2025 14:48:24 +0000 (0:00:07.354) 0:00:29.987 ******** 2025-06-11 14:48:58.677515 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:48:58.677523 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:48:58.677531 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:48:58.677538 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:48:58.677546 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:48:58.677553 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:48:58.677561 | orchestrator | 2025-06-11 14:48:58.677569 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-11 14:48:58.677577 | orchestrator | Wednesday 11 June 2025 14:48:26 +0000 (0:00:01.632) 0:00:31.620 ******** 2025-06-11 14:48:58.677585 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:48:58.677592 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:48:58.677600 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:48:58.677608 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:48:58.677618 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:48:58.677627 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:48:58.677634 | orchestrator | 2025-06-11 14:48:58.677642 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-11 14:48:58.677650 | orchestrator | Wednesday 11 June 2025 14:48:35 +0000 (0:00:09.151) 0:00:40.772 ******** 2025-06-11 14:48:58.677680 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-11 14:48:58.677689 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-11 14:48:58.677697 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-11 14:48:58.677705 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-11 14:48:58.677713 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-11 14:48:58.677730 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-11 14:48:58.677739 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-11 14:48:58.677747 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-11 14:48:58.677755 | orchestrator | changed: 
[testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-11 14:48:58.677763 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-11 14:48:58.677770 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-11 14:48:58.677778 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-11 14:48:58.677786 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677794 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677801 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677809 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677817 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677824 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-11 14:48:58.677832 | orchestrator | 2025-06-11 14:48:58.677840 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-11 14:48:58.677848 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:07.458) 0:00:48.230 ******** 2025-06-11 14:48:58.677856 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-11 14:48:58.677864 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:48:58.677871 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-11 14:48:58.677879 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:48:58.677887 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-11 14:48:58.677895 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:48:58.677902 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-11 14:48:58.677910 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-11 14:48:58.677918 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-11 14:48:58.677926 | orchestrator | 2025-06-11 14:48:58.677934 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-11 14:48:58.677941 | orchestrator | Wednesday 11 June 2025 14:48:45 +0000 (0:00:02.459) 0:00:50.690 ******** 2025-06-11 14:48:58.677949 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-11 14:48:58.677957 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:48:58.677964 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-11 14:48:58.677972 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:48:58.677980 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-11 14:48:58.677988 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:48:58.677996 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-11 14:48:58.678003 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-11 14:48:58.678011 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-11 14:48:58.678060 | orchestrator | 2025-06-11 14:48:58.678068 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-11 14:48:58.678076 | orchestrator | Wednesday 11 June 2025 14:48:49 +0000 (0:00:03.758) 0:00:54.448 ******** 2025-06-11 14:48:58.678089 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:48:58.678097 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:48:58.678105 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:48:58.678112 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:48:58.678120 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:48:58.678128 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:48:58.678135 | orchestrator | 2025-06-11 14:48:58.678147 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:48:58.678155 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-11 14:48:58.678163 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-11 14:48:58.678171 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-11 14:48:58.678179 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:48:58.678186 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:48:58.678199 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 14:48:58.678208 | orchestrator | 2025-06-11 14:48:58.678216 | orchestrator | 2025-06-11 14:48:58.678224 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:48:58.678231 | orchestrator | Wednesday 11 June 2025 14:48:57 +0000 (0:00:08.337) 0:01:02.785 ******** 2025-06-11 14:48:58.678239 | orchestrator | =============================================================================== 2025-06-11 14:48:58.678247 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.49s 2025-06-11 14:48:58.678255 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.46s 2025-06-11 14:48:58.678262 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.35s 2025-06-11 14:48:58.678270 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.88s 2025-06-11 14:48:58.678278 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.76s 2025-06-11 14:48:58.678285 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.46s 2025-06-11 14:48:58.678293 | orchestrator | module-load : Load modules ---------------------------------------------- 2.42s 2025-06-11 14:48:58.678301 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.33s 2025-06-11 14:48:58.678309 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.19s 2025-06-11 14:48:58.678316 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.91s 2025-06-11 14:48:58.678324 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.74s 
2025-06-11 14:48:58.678332 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.74s
2025-06-11 14:48:58.678339 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.65s
2025-06-11 14:48:58.678347 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.63s
2025-06-11 14:48:58.678355 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.48s
2025-06-11 14:48:58.678363 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.25s
2025-06-11 14:48:58.678370 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2025-06-11 14:48:58.678378 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.83s
2025-06-11 14:48:58.678393 | orchestrator | 2025-06-11 14:48:58 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED
2025-06-11 14:48:58.678401 | orchestrator | 2025-06-11 14:48:58 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:49:01.720336 | orchestrator | 2025-06-11 14:49:01 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED
2025-06-11 14:49:01.724191 | orchestrator | 2025-06-11 14:49:01 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:49:01.725084 | orchestrator | 2025-06-11 14:49:01 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:49:01.727132 | orchestrator | 2025-06-11 14:49:01 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:49:01.727862 | orchestrator | 2025-06-11 14:49:01 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED
2025-06-11 14:49:01.727882 | orchestrator | 2025-06-11 14:49:01 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles for the same five tasks repeat every ~3 seconds at 14:49:04, 14:49:07, 14:49:10, 14:49:13, 14:49:16 and 14:49:19 ...]
2025-06-11 14:49:22.988164 | orchestrator | 2025-06-11 14:49:22 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state STARTED
2025-06-11 14:49:22.988879 | orchestrator | 2025-06-11 14:49:22 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:49:22.989948 | orchestrator | 2025-06-11 14:49:22 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:49:22.991630 |
orchestrator | 2025-06-11 14:49:22 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:49:22.992257 | orchestrator | 2025-06-11 14:49:22 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:49:22.992279 | orchestrator | 2025-06-11 14:49:22 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:49:26.033819 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task f4607c0f-5088-4de0-b391-9eade196aa4a is in state SUCCESS 2025-06-11 14:49:26.034602 | orchestrator | 2025-06-11 14:49:26.034671 | orchestrator | 2025-06-11 14:49:26.034693 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-11 14:49:26.034745 | orchestrator | 2025-06-11 14:49:26.034759 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-11 14:49:26.034770 | orchestrator | Wednesday 11 June 2025 14:45:13 +0000 (0:00:00.203) 0:00:00.203 ******** 2025-06-11 14:49:26.034781 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.034793 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.034804 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.034814 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.034831 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.034849 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.034861 | orchestrator | 2025-06-11 14:49:26.034897 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-11 14:49:26.034909 | orchestrator | Wednesday 11 June 2025 14:45:14 +0000 (0:00:00.792) 0:00:00.996 ******** 2025-06-11 14:49:26.034920 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.034931 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.034942 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.034952 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.034963 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.034973 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.034984 | orchestrator | 2025-06-11 14:49:26.034994 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-11 14:49:26.035005 | orchestrator | Wednesday 11 June 2025 14:45:14 +0000 (0:00:00.788) 0:00:01.784 ******** 2025-06-11 14:49:26.035016 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.035026 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.035037 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.035047 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.035058 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.035068 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.035082 | orchestrator | 2025-06-11 14:49:26.035100 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-11 14:49:26.035111 | orchestrator | Wednesday 11 June 2025 14:45:15 +0000 (0:00:00.922) 0:00:02.707 ******** 2025-06-11 14:49:26.035121 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.035132 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.035143 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.035153 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.035164 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.035174 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.035185 | 
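The forwarding task above flips a kernel sysctl so that pod and service traffic can be routed between node interfaces. A minimal sketch of such a task is below; the k3s_prereq role's actual implementation is not shown in this log, so the module choice (ansible.posix.sysctl) and its option values are assumptions.

# Illustrative only; not taken from the k3s_prereq role's source.
- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true

The IPv6 forwarding and router-advertisement tasks that follow in the log presumably apply the same pattern to the corresponding net.ipv6.conf.* keys.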
orchestrator | 2025-06-11 14:49:26.035197 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-11 14:49:26.035209 | orchestrator | Wednesday 11 June 2025 14:45:18 +0000 (0:00:03.002) 0:00:05.709 ******** 2025-06-11 14:49:26.035221 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.035234 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.035246 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.035258 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.035270 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.035282 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.035294 | orchestrator | 2025-06-11 14:49:26.035306 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-11 14:49:26.035318 | orchestrator | Wednesday 11 June 2025 14:45:20 +0000 (0:00:01.906) 0:00:07.616 ******** 2025-06-11 14:49:26.035331 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.035343 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.035354 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.035367 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.035379 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.035391 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.035404 | orchestrator | 2025-06-11 14:49:26.035416 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-11 14:49:26.035428 | orchestrator | Wednesday 11 June 2025 14:45:22 +0000 (0:00:01.436) 0:00:09.052 ******** 2025-06-11 14:49:26.035440 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.035452 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.035464 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.035476 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.035488 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.035500 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.035512 | orchestrator | 2025-06-11 14:49:26.035525 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-11 14:49:26.035537 | orchestrator | Wednesday 11 June 2025 14:45:22 +0000 (0:00:00.856) 0:00:09.909 ******** 2025-06-11 14:49:26.035557 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.035568 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.035578 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.035589 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.035599 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.035610 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.035620 | orchestrator | 2025-06-11 14:49:26.035631 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-11 14:49:26.035661 | orchestrator | Wednesday 11 June 2025 14:45:23 +0000 (0:00:00.547) 0:00:10.457 ******** 2025-06-11 14:49:26.035673 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035683 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035694 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.035705 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035728 | 
orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035739 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.035750 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035761 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035772 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.035783 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035805 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035817 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.035828 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035838 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035849 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.035860 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 14:49:26.035870 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 14:49:26.035881 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.035891 | orchestrator | 2025-06-11 14:49:26.035902 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-11 14:49:26.035912 | orchestrator | Wednesday 11 June 2025 14:45:24 +0000 (0:00:00.862) 0:00:11.320 ******** 2025-06-11 14:49:26.035923 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.035934 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.035944 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.035955 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.035966 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.035976 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.035987 | orchestrator | 2025-06-11 14:49:26.035997 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-11 14:49:26.036009 | orchestrator | Wednesday 11 June 2025 14:45:25 +0000 (0:00:01.305) 0:00:12.626 ******** 2025-06-11 14:49:26.036020 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.036030 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.036041 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.036051 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.036062 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.036072 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.036083 | orchestrator | 2025-06-11 14:49:26.036093 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-11 14:49:26.036104 | orchestrator | Wednesday 11 June 2025 14:45:26 +0000 (0:00:00.850) 0:00:13.477 ******** 2025-06-11 14:49:26.036115 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.036126 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.036143 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.036153 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.036164 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.036174 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.036184 | orchestrator | 
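Only the x64 download runs here because the testbed nodes report an x86_64 architecture; the arm64 and armhf variants logged next are skipped by an architecture guard. A sketch of the pattern follows; the variable name (k3s_version) and the upstream release URL layout are assumptions, since the role's variables are not shown in the log.

# Illustrative architecture-guarded download; names and URL are assumed.
- name: Download k3s binary x64
  ansible.builtin.get_url:
    url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"
    dest: /usr/local/bin/k3s
    owner: root
    group: root
    mode: "0755"
  when: ansible_architecture == "x86_64"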
2025-06-11 14:49:26.036195 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-11 14:49:26.036205 | orchestrator | Wednesday 11 June 2025 14:45:33 +0000 (0:00:06.469) 0:00:19.947 ******** 2025-06-11 14:49:26.036216 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.036226 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.036236 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.036247 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.036257 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.036268 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.036278 | orchestrator | 2025-06-11 14:49:26.036289 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-11 14:49:26.036299 | orchestrator | Wednesday 11 June 2025 14:45:34 +0000 (0:00:01.110) 0:00:21.057 ******** 2025-06-11 14:49:26.036310 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.036320 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.036331 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.036341 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.036351 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.036362 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.036372 | orchestrator | 2025-06-11 14:49:26.036383 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-11 14:49:26.036395 | orchestrator | Wednesday 11 June 2025 14:45:35 +0000 (0:00:01.512) 0:00:22.570 ******** 2025-06-11 14:49:26.036405 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.036416 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.036426 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.036436 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.036447 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.036457 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.036467 | orchestrator | 2025-06-11 14:49:26.036478 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-11 14:49:26.036489 | orchestrator | Wednesday 11 June 2025 14:45:36 +0000 (0:00:00.794) 0:00:23.364 ******** 2025-06-11 14:49:26.036499 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-11 14:49:26.036510 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-11 14:49:26.036521 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.036531 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-11 14:49:26.036542 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-11 14:49:26.036552 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.036562 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-11 14:49:26.036573 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-11 14:49:26.036583 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.036594 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-11 14:49:26.036604 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-11 14:49:26.036614 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.036629 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-11 
14:49:26.036654 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-11 14:49:26.036665 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.036675 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-11 14:49:26.036686 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-11 14:49:26.036696 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.036707 | orchestrator | 2025-06-11 14:49:26.036717 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-11 14:49:26.036740 | orchestrator | Wednesday 11 June 2025 14:45:37 +0000 (0:00:01.156) 0:00:24.521 ******** 2025-06-11 14:49:26.036751 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.036762 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.036773 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.036783 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.036793 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.036804 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.036815 | orchestrator | 2025-06-11 14:49:26.036825 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-11 14:49:26.036836 | orchestrator | 2025-06-11 14:49:26.036846 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-11 14:49:26.036857 | orchestrator | Wednesday 11 June 2025 14:45:38 +0000 (0:00:01.388) 0:00:25.910 ******** 2025-06-11 14:49:26.036867 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.036878 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.036888 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.036899 | orchestrator | 2025-06-11 14:49:26.036909 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-11 14:49:26.036920 | orchestrator | Wednesday 11 June 2025 14:45:39 +0000 (0:00:00.819) 0:00:26.729 ******** 2025-06-11 14:49:26.036930 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.036941 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.036952 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.036962 | orchestrator | 2025-06-11 14:49:26.036973 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-11 14:49:26.036983 | orchestrator | Wednesday 11 June 2025 14:45:40 +0000 (0:00:01.127) 0:00:27.856 ******** 2025-06-11 14:49:26.036994 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.037004 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.037014 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.037025 | orchestrator | 2025-06-11 14:49:26.037036 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-11 14:49:26.037046 | orchestrator | Wednesday 11 June 2025 14:45:42 +0000 (0:00:01.250) 0:00:29.107 ******** 2025-06-11 14:49:26.037057 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.037067 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.037078 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.037088 | orchestrator | 2025-06-11 14:49:26.037098 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-11 14:49:26.037109 | orchestrator | Wednesday 11 June 2025 14:45:43 +0000 (0:00:00.861) 0:00:29.968 ******** 2025-06-11 14:49:26.037120 | 
orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.037130 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037141 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037151 | orchestrator | 2025-06-11 14:49:26.037161 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-11 14:49:26.037172 | orchestrator | Wednesday 11 June 2025 14:45:43 +0000 (0:00:00.371) 0:00:30.340 ******** 2025-06-11 14:49:26.037182 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:49:26.037193 | orchestrator | 2025-06-11 14:49:26.037204 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-11 14:49:26.037214 | orchestrator | Wednesday 11 June 2025 14:45:44 +0000 (0:00:00.727) 0:00:31.068 ******** 2025-06-11 14:49:26.037225 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.037235 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.037246 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.037256 | orchestrator | 2025-06-11 14:49:26.037267 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-11 14:49:26.037277 | orchestrator | Wednesday 11 June 2025 14:45:46 +0000 (0:00:01.910) 0:00:32.978 ******** 2025-06-11 14:49:26.037288 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037298 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037314 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.037325 | orchestrator | 2025-06-11 14:49:26.037336 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-11 14:49:26.037346 | orchestrator | Wednesday 11 June 2025 14:45:46 +0000 (0:00:00.757) 0:00:33.736 ******** 2025-06-11 14:49:26.037357 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037367 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037378 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.037388 | orchestrator | 2025-06-11 14:49:26.037398 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-11 14:49:26.037409 | orchestrator | Wednesday 11 June 2025 14:45:47 +0000 (0:00:00.993) 0:00:34.729 ******** 2025-06-11 14:49:26.037420 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037430 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037440 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.037451 | orchestrator | 2025-06-11 14:49:26.037461 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-11 14:49:26.037472 | orchestrator | Wednesday 11 June 2025 14:45:50 +0000 (0:00:02.253) 0:00:36.982 ******** 2025-06-11 14:49:26.037482 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.037493 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037503 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037514 | orchestrator | 2025-06-11 14:49:26.037524 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-11 14:49:26.037534 | orchestrator | Wednesday 11 June 2025 14:45:50 +0000 (0:00:00.634) 0:00:37.617 ******** 2025-06-11 14:49:26.037545 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.037556 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037566 | 
orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.037576 | orchestrator | 2025-06-11 14:49:26.037587 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-11 14:49:26.037602 | orchestrator | Wednesday 11 June 2025 14:45:51 +0000 (0:00:00.582) 0:00:38.199 ******** 2025-06-11 14:49:26.037613 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.037623 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.037685 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.037698 | orchestrator | 2025-06-11 14:49:26.037709 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-11 14:49:26.037720 | orchestrator | Wednesday 11 June 2025 14:45:52 +0000 (0:00:01.666) 0:00:39.866 ******** 2025-06-11 14:49:26.037737 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-11 14:49:26.037749 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-11 14:49:26.037760 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-11 14:49:26.037771 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-11 14:49:26.037782 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-11 14:49:26.037793 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-11 14:49:26.037803 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-11 14:49:26.037814 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-11 14:49:26.037825 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-11 14:49:26.037842 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-11 14:49:26.037853 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-11 14:49:26.037863 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-11 14:49:26.037874 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-11 14:49:26.037885 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-11 14:49:26.037895 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-06-11 14:49:26.037906 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.037917 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.037927 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.037938 | orchestrator | 2025-06-11 14:49:26.037949 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-11 14:49:26.037959 | orchestrator | Wednesday 11 June 2025 14:46:48 +0000 (0:00:55.955) 0:01:35.821 ******** 2025-06-11 14:49:26.037970 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.037981 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.037991 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.038002 | orchestrator | 2025-06-11 14:49:26.038053 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-11 14:49:26.038066 | orchestrator | Wednesday 11 June 2025 14:46:49 +0000 (0:00:00.260) 0:01:36.082 ******** 2025-06-11 14:49:26.038076 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038085 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038094 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038103 | orchestrator | 2025-06-11 14:49:26.038113 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-11 14:49:26.038122 | orchestrator | Wednesday 11 June 2025 14:46:50 +0000 (0:00:00.961) 0:01:37.043 ******** 2025-06-11 14:49:26.038131 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038141 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038150 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038159 | orchestrator | 2025-06-11 14:49:26.038168 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-11 14:49:26.038178 | orchestrator | Wednesday 11 June 2025 14:46:51 +0000 (0:00:01.168) 0:01:38.212 ******** 2025-06-11 14:49:26.038187 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038196 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038206 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038215 | orchestrator | 2025-06-11 14:49:26.038224 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-11 14:49:26.038234 | orchestrator | Wednesday 11 June 2025 14:47:06 +0000 (0:00:15.390) 0:01:53.602 ******** 2025-06-11 14:49:26.038243 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038252 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.038262 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.038271 | orchestrator | 2025-06-11 14:49:26.038280 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-11 14:49:26.038300 | orchestrator | Wednesday 11 June 2025 14:47:07 +0000 (0:00:00.735) 0:01:54.337 ******** 2025-06-11 14:49:26.038310 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038319 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.038328 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.038338 | orchestrator | 2025-06-11 14:49:26.038347 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-11 14:49:26.038362 | orchestrator | Wednesday 11 June 2025 14:47:08 +0000 (0:00:00.664) 0:01:55.002 ******** 2025-06-11 14:49:26.038372 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038382 | orchestrator | changed: 
[testbed-node-2] 2025-06-11 14:49:26.038391 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038400 | orchestrator | 2025-06-11 14:49:26.038415 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-11 14:49:26.038425 | orchestrator | Wednesday 11 June 2025 14:47:08 +0000 (0:00:00.635) 0:01:55.637 ******** 2025-06-11 14:49:26.038434 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038443 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.038453 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.038462 | orchestrator | 2025-06-11 14:49:26.038471 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-11 14:49:26.038481 | orchestrator | Wednesday 11 June 2025 14:47:09 +0000 (0:00:01.077) 0:01:56.715 ******** 2025-06-11 14:49:26.038490 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038499 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.038509 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.038518 | orchestrator | 2025-06-11 14:49:26.038527 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-11 14:49:26.038537 | orchestrator | Wednesday 11 June 2025 14:47:10 +0000 (0:00:00.287) 0:01:57.003 ******** 2025-06-11 14:49:26.038546 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038555 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038565 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038574 | orchestrator | 2025-06-11 14:49:26.038583 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-11 14:49:26.038593 | orchestrator | Wednesday 11 June 2025 14:47:10 +0000 (0:00:00.642) 0:01:57.646 ******** 2025-06-11 14:49:26.038602 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038612 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038621 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038630 | orchestrator | 2025-06-11 14:49:26.038653 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-11 14:49:26.038663 | orchestrator | Wednesday 11 June 2025 14:47:11 +0000 (0:00:00.713) 0:01:58.360 ******** 2025-06-11 14:49:26.038672 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038681 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038690 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038700 | orchestrator | 2025-06-11 14:49:26.038709 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-11 14:49:26.038718 | orchestrator | Wednesday 11 June 2025 14:47:12 +0000 (0:00:01.192) 0:01:59.552 ******** 2025-06-11 14:49:26.038728 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:49:26.038737 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:49:26.038746 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:49:26.038756 | orchestrator | 2025-06-11 14:49:26.038765 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-11 14:49:26.038774 | orchestrator | Wednesday 11 June 2025 14:47:13 +0000 (0:00:00.822) 0:02:00.374 ******** 2025-06-11 14:49:26.038784 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.038793 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.038802 | orchestrator | skipping: [testbed-node-2] 2025-06-11 
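
The node-token tasks in this stretch follow a stat/chmod/slurp/restore pattern: /var/lib/rancher/k3s/server/node-token is root-only by default, so its mode is recorded, relaxed just long enough to read the join token, and then put back. A sketch of that pattern (the path is the k3s default; the module choices and the k3s_token fact name are assumptions):

- name: Register node-token file access mode
  ansible.builtin.stat:
    path: /var/lib/rancher/k3s/server/node-token
  register: node_token_stat

- name: Change file access node-token
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "0644"

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token_raw

- name: Store Master node-token
  ansible.builtin.set_fact:
    k3s_token: "{{ node_token_raw.content | b64decode | trim }}"

- name: Restore node-token file access
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "{{ node_token_stat.stat.mode }}"
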
14:49:26.038812 | orchestrator | 2025-06-11 14:49:26.038821 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-11 14:49:26.038830 | orchestrator | Wednesday 11 June 2025 14:47:13 +0000 (0:00:00.288) 0:02:00.662 ******** 2025-06-11 14:49:26.038840 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.038849 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.038858 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.038868 | orchestrator | 2025-06-11 14:49:26.038877 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-11 14:49:26.038886 | orchestrator | Wednesday 11 June 2025 14:47:14 +0000 (0:00:00.330) 0:02:00.993 ******** 2025-06-11 14:49:26.038901 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038939 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.038950 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.038959 | orchestrator | 2025-06-11 14:49:26.038969 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-11 14:49:26.038978 | orchestrator | Wednesday 11 June 2025 14:47:14 +0000 (0:00:00.876) 0:02:01.869 ******** 2025-06-11 14:49:26.038988 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.038997 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.039006 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.039016 | orchestrator | 2025-06-11 14:49:26.039025 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-11 14:49:26.039035 | orchestrator | Wednesday 11 June 2025 14:47:15 +0000 (0:00:00.636) 0:02:02.505 ******** 2025-06-11 14:49:26.039045 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-11 14:49:26.039054 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-11 14:49:26.039063 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-11 14:49:26.039072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-11 14:49:26.039082 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-11 14:49:26.039091 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-11 14:49:26.039101 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-11 14:49:26.039110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-11 14:49:26.039120 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-11 14:49:26.039129 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-11 14:49:26.039139 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-11 14:49:26.039148 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-11 14:49:26.039164 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-11 14:49:26.039174 | orchestrator 
| changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-11 14:49:26.039183 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-11 14:49:26.039193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-11 14:49:26.039202 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-11 14:49:26.039211 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-11 14:49:26.039221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-11 14:49:26.039230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-11 14:49:26.039240 | orchestrator | 2025-06-11 14:49:26.039249 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-11 14:49:26.039258 | orchestrator | 2025-06-11 14:49:26.039268 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-11 14:49:26.039277 | orchestrator | Wednesday 11 June 2025 14:47:19 +0000 (0:00:03.495) 0:02:06.001 ******** 2025-06-11 14:49:26.039286 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.039296 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.039305 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.039315 | orchestrator | 2025-06-11 14:49:26.039330 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-11 14:49:26.039339 | orchestrator | Wednesday 11 June 2025 14:47:19 +0000 (0:00:00.747) 0:02:06.748 ******** 2025-06-11 14:49:26.039349 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.039358 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.039367 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.039377 | orchestrator | 2025-06-11 14:49:26.039386 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-11 14:49:26.039396 | orchestrator | Wednesday 11 June 2025 14:47:20 +0000 (0:00:00.608) 0:02:07.357 ******** 2025-06-11 14:49:26.039405 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.039414 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.039423 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.039433 | orchestrator | 2025-06-11 14:49:26.039442 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-11 14:49:26.039452 | orchestrator | Wednesday 11 June 2025 14:47:20 +0000 (0:00:00.323) 0:02:07.680 ******** 2025-06-11 14:49:26.039461 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:49:26.039471 | orchestrator | 2025-06-11 14:49:26.039480 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-11 14:49:26.039490 | orchestrator | Wednesday 11 June 2025 14:47:21 +0000 (0:00:00.785) 0:02:08.466 ******** 2025-06-11 14:49:26.039499 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.039508 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.039518 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.039527 | orchestrator | 2025-06-11 14:49:26.039536 | orchestrator | TASK [k3s_agent : Copy K3s 
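
k3s re-applies anything left under /var/lib/rancher/k3s/server/manifests on every start, which is why the loop above deletes the bootstrap-only manifests once the cluster is initialized. A sketch of that cleanup with the paths taken from the log items above (the module call itself is an assumption):

- name: Remove manifests and folders that are only needed for bootstrapping
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent   # also removes the metrics-server directory recursively
  loop:
    - /var/lib/rancher/k3s/server/manifests/ccm.yaml
    - /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
    - /var/lib/rancher/k3s/server/manifests/local-storage.yaml
    - /var/lib/rancher/k3s/server/manifests/runtimes.yaml
    - /var/lib/rancher/k3s/server/manifests/vip.yaml
    - /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml
    - /var/lib/rancher/k3s/server/manifests/coredns.yaml
    - /var/lib/rancher/k3s/server/manifests/metrics-server
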
http_proxy conf file] ******************************* 2025-06-11 14:49:26.039546 | orchestrator | Wednesday 11 June 2025 14:47:21 +0000 (0:00:00.317) 0:02:08.785 ******** 2025-06-11 14:49:26.039555 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.039565 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.040138 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.040154 | orchestrator | 2025-06-11 14:49:26.040163 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-11 14:49:26.040173 | orchestrator | Wednesday 11 June 2025 14:47:22 +0000 (0:00:00.317) 0:02:09.102 ******** 2025-06-11 14:49:26.040183 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.040192 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.040201 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.040210 | orchestrator | 2025-06-11 14:49:26.040220 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-11 14:49:26.040229 | orchestrator | Wednesday 11 June 2025 14:47:22 +0000 (0:00:00.310) 0:02:09.413 ******** 2025-06-11 14:49:26.040239 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.040248 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.040257 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.040267 | orchestrator | 2025-06-11 14:49:26.040276 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-11 14:49:26.040285 | orchestrator | Wednesday 11 June 2025 14:47:23 +0000 (0:00:01.329) 0:02:10.743 ******** 2025-06-11 14:49:26.040295 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:49:26.040304 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:49:26.040313 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:49:26.040322 | orchestrator | 2025-06-11 14:49:26.040332 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-11 14:49:26.040341 | orchestrator | 2025-06-11 14:49:26.040350 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-11 14:49:26.040360 | orchestrator | Wednesday 11 June 2025 14:47:32 +0000 (0:00:08.404) 0:02:19.147 ******** 2025-06-11 14:49:26.040369 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.040378 | orchestrator | 2025-06-11 14:49:26.040388 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-11 14:49:26.040404 | orchestrator | Wednesday 11 June 2025 14:47:33 +0000 (0:00:00.833) 0:02:19.981 ******** 2025-06-11 14:49:26.040413 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040423 | orchestrator | 2025-06-11 14:49:26.040432 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-11 14:49:26.040441 | orchestrator | Wednesday 11 June 2025 14:47:33 +0000 (0:00:00.404) 0:02:20.385 ******** 2025-06-11 14:49:26.040450 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-11 14:49:26.040460 | orchestrator | 2025-06-11 14:49:26.040476 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-11 14:49:26.040486 | orchestrator | Wednesday 11 June 2025 14:47:34 +0000 (0:00:00.978) 0:02:21.363 ******** 2025-06-11 14:49:26.040495 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040505 | orchestrator | 2025-06-11 
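
The "Prepare kubeconfig file" play pulls the admin kubeconfig that k3s writes on the server nodes over to the manager; the "testbed-manager -> testbed-node-0" notation above is Ansible's delegation marker for exactly that. A minimal sketch, assuming slurp/copy and a hypothetical operator_home variable (the actual role tasks may differ):

- name: Get kubeconfig file
  ansible.builtin.slurp:
    src: /etc/rancher/k3s/k3s.yaml   # default k3s admin kubeconfig
  delegate_to: testbed-node-0
  register: k3s_kubeconfig

- name: Write kubeconfig file
  ansible.builtin.copy:
    content: "{{ k3s_kubeconfig.content | b64decode }}"
    dest: "{{ operator_home }}/.kube/config"
    mode: "0600"
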
14:49:26.040514 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-11 14:49:26.040523 | orchestrator | Wednesday 11 June 2025 14:47:35 +0000 (0:00:00.880) 0:02:22.244 ******** 2025-06-11 14:49:26.040532 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040542 | orchestrator | 2025-06-11 14:49:26.040551 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-11 14:49:26.040560 | orchestrator | Wednesday 11 June 2025 14:47:35 +0000 (0:00:00.562) 0:02:22.806 ******** 2025-06-11 14:49:26.040570 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-11 14:49:26.040579 | orchestrator | 2025-06-11 14:49:26.040589 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-11 14:49:26.040598 | orchestrator | Wednesday 11 June 2025 14:47:37 +0000 (0:00:01.565) 0:02:24.372 ******** 2025-06-11 14:49:26.040607 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-11 14:49:26.040616 | orchestrator | 2025-06-11 14:49:26.040626 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-11 14:49:26.040651 | orchestrator | Wednesday 11 June 2025 14:47:38 +0000 (0:00:00.845) 0:02:25.217 ******** 2025-06-11 14:49:26.040661 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040671 | orchestrator | 2025-06-11 14:49:26.040680 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-11 14:49:26.040689 | orchestrator | Wednesday 11 June 2025 14:47:38 +0000 (0:00:00.436) 0:02:25.654 ******** 2025-06-11 14:49:26.040699 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040708 | orchestrator | 2025-06-11 14:49:26.040717 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-11 14:49:26.040727 | orchestrator | 2025-06-11 14:49:26.040736 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-11 14:49:26.040745 | orchestrator | Wednesday 11 June 2025 14:47:39 +0000 (0:00:00.453) 0:02:26.107 ******** 2025-06-11 14:49:26.040754 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.040764 | orchestrator | 2025-06-11 14:49:26.040773 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-11 14:49:26.040782 | orchestrator | Wednesday 11 June 2025 14:47:39 +0000 (0:00:00.148) 0:02:26.256 ******** 2025-06-11 14:49:26.040792 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-11 14:49:26.040817 | orchestrator | 2025-06-11 14:49:26.040827 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-11 14:49:26.040837 | orchestrator | Wednesday 11 June 2025 14:47:39 +0000 (0:00:00.434) 0:02:26.691 ******** 2025-06-11 14:49:26.040846 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.040855 | orchestrator | 2025-06-11 14:49:26.040865 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-11 14:49:26.040874 | orchestrator | Wednesday 11 June 2025 14:47:40 +0000 (0:00:00.996) 0:02:27.687 ******** 2025-06-11 14:49:26.040883 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.040892 | orchestrator | 2025-06-11 14:49:26.040902 | orchestrator | TASK [kubectl : Add repository gpg key] 
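
A fresh k3s.yaml points at https://127.0.0.1:6443, so the play rewrites the server field to the kube VIP configured earlier (https://192.168.16.8:6443 in this run) and then wires up KUBECONFIG plus shell completion for the operator user. A sketch under assumptions: a regex-based rewrite (kubectl config set-cluster would work equally well) and the hypothetical operator_home variable again:

- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ operator_home }}/.kube/config"
    regexp: 'https://127\.0\.0\.1:6443'
    replace: https://192.168.16.8:6443

- name: Set KUBECONFIG environment variable
  ansible.builtin.lineinfile:
    path: "{{ operator_home }}/.bashrc"
    line: export KUBECONFIG=$HOME/.kube/config
    create: true

- name: Enable kubectl command line completion
  ansible.builtin.lineinfile:
    path: "{{ operator_home }}/.bashrc"
    line: source <(kubectl completion bash)
    create: true
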
**************************************** 2025-06-11 14:49:26.040916 | orchestrator | Wednesday 11 June 2025 14:47:42 +0000 (0:00:01.839) 0:02:29.526 ******** 2025-06-11 14:49:26.040931 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.040940 | orchestrator | 2025-06-11 14:49:26.040950 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-11 14:49:26.040959 | orchestrator | Wednesday 11 June 2025 14:47:43 +0000 (0:00:00.777) 0:02:30.304 ******** 2025-06-11 14:49:26.040968 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.040978 | orchestrator | 2025-06-11 14:49:26.040987 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-11 14:49:26.040996 | orchestrator | Wednesday 11 June 2025 14:47:43 +0000 (0:00:00.418) 0:02:30.722 ******** 2025-06-11 14:49:26.041005 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.041014 | orchestrator | 2025-06-11 14:49:26.041024 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-11 14:49:26.041033 | orchestrator | Wednesday 11 June 2025 14:47:51 +0000 (0:00:07.385) 0:02:38.108 ******** 2025-06-11 14:49:26.041042 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.041052 | orchestrator | 2025-06-11 14:49:26.041061 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-11 14:49:26.041070 | orchestrator | Wednesday 11 June 2025 14:48:01 +0000 (0:00:10.715) 0:02:48.823 ******** 2025-06-11 14:49:26.041079 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.041089 | orchestrator | 2025-06-11 14:49:26.041098 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-11 14:49:26.041107 | orchestrator | 2025-06-11 14:49:26.041117 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-11 14:49:26.041126 | orchestrator | Wednesday 11 June 2025 14:48:02 +0000 (0:00:00.476) 0:02:49.299 ******** 2025-06-11 14:49:26.041136 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.041145 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.041154 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.041163 | orchestrator | 2025-06-11 14:49:26.041173 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-11 14:49:26.041182 | orchestrator | Wednesday 11 June 2025 14:48:02 +0000 (0:00:00.454) 0:02:49.754 ******** 2025-06-11 14:49:26.041191 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041201 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.041210 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.041219 | orchestrator | 2025-06-11 14:49:26.041228 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-11 14:49:26.041238 | orchestrator | Wednesday 11 June 2025 14:48:03 +0000 (0:00:00.284) 0:02:50.038 ******** 2025-06-11 14:49:26.041247 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:49:26.041256 | orchestrator | 2025-06-11 14:49:26.041266 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-11 14:49:26.041281 | orchestrator | Wednesday 11 June 2025 14:48:03 +0000 (0:00:00.522) 0:02:50.561 ******** 2025-06-11 
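
The kubectl role above mirrors the usual Debian-family setup for the community Kubernetes package repository: fetch the signing key, register the repository, install the package (the "Remove old architecture-dependent repository" task suggests a migration away from the retired apt.kubernetes.io layout). A sketch; the pkgs.k8s.io minor version shown is only an example, not what this job pinned:

- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key
    dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
    mode: "0644"

- name: Add repository Debian
  ansible.builtin.apt_repository:
    repo: deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /
    state: present

- name: Install required packages
  ansible.builtin.apt:
    name: kubectl
    update_cache: true
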
14:49:26.041291 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041300 | orchestrator | 2025-06-11 14:49:26.041309 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-11 14:49:26.041319 | orchestrator | Wednesday 11 June 2025 14:48:04 +0000 (0:00:01.157) 0:02:51.718 ******** 2025-06-11 14:49:26.041328 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041337 | orchestrator | 2025-06-11 14:49:26.041347 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-11 14:49:26.041356 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:00.736) 0:02:52.455 ******** 2025-06-11 14:49:26.041365 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041374 | orchestrator | 2025-06-11 14:49:26.041384 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-11 14:49:26.041393 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:00.154) 0:02:52.609 ******** 2025-06-11 14:49:26.041402 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041417 | orchestrator | 2025-06-11 14:49:26.041426 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-11 14:49:26.041436 | orchestrator | Wednesday 11 June 2025 14:48:06 +0000 (0:00:00.943) 0:02:53.553 ******** 2025-06-11 14:49:26.041445 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041455 | orchestrator | 2025-06-11 14:49:26.041464 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-11 14:49:26.041473 | orchestrator | Wednesday 11 June 2025 14:48:06 +0000 (0:00:00.232) 0:02:53.785 ******** 2025-06-11 14:49:26.041493 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041502 | orchestrator | 2025-06-11 14:49:26.041512 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-11 14:49:26.041521 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.178) 0:02:53.963 ******** 2025-06-11 14:49:26.041530 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041540 | orchestrator | 2025-06-11 14:49:26.041549 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-11 14:49:26.041558 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.203) 0:02:54.167 ******** 2025-06-11 14:49:26.041567 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041577 | orchestrator | 2025-06-11 14:49:26.041586 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-11 14:49:26.041596 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:00.194) 0:02:54.361 ******** 2025-06-11 14:49:26.041605 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041614 | orchestrator | 2025-06-11 14:49:26.041624 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-11 14:49:26.041654 | orchestrator | Wednesday 11 June 2025 14:48:11 +0000 (0:00:04.178) 0:02:58.540 ******** 2025-06-11 14:49:26.041665 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-11 14:49:26.041674 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
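
After the install, each Cilium workload is polled until it finishes rolling out; the FAILED - RETRYING line above is that poll backing off while the Hubble pods come up (44.3 s in total, per the recap below). A sketch of the wait, assuming kubectl rollout status underneath; the retries value matches the "30 retries left" seen above, the rest is illustrative:

- name: Wait for Cilium resources
  ansible.builtin.command:
    cmd: kubectl --namespace kube-system rollout status --timeout=30s {{ item }}
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  delegate_to: localhost
  register: cilium_rollout
  until: cilium_rollout.rc == 0
  retries: 30
  delay: 10
  changed_when: false
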
2025-06-11 14:49:26.041683 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-11 14:49:26.041697 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-11 14:49:26.041706 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-11 14:49:26.041716 | orchestrator | 2025-06-11 14:49:26.041725 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-11 14:49:26.041734 | orchestrator | Wednesday 11 June 2025 14:48:55 +0000 (0:00:44.313) 0:03:42.853 ******** 2025-06-11 14:49:26.041744 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041753 | orchestrator | 2025-06-11 14:49:26.041762 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-11 14:49:26.041772 | orchestrator | Wednesday 11 June 2025 14:48:57 +0000 (0:00:01.244) 0:03:44.098 ******** 2025-06-11 14:49:26.041781 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041790 | orchestrator | 2025-06-11 14:49:26.041799 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-11 14:49:26.041808 | orchestrator | Wednesday 11 June 2025 14:48:58 +0000 (0:00:01.510) 0:03:45.609 ******** 2025-06-11 14:49:26.041818 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-11 14:49:26.041827 | orchestrator | 2025-06-11 14:49:26.041837 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-11 14:49:26.041846 | orchestrator | Wednesday 11 June 2025 14:49:00 +0000 (0:00:01.582) 0:03:47.192 ******** 2025-06-11 14:49:26.041855 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041864 | orchestrator | 2025-06-11 14:49:26.041874 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-11 14:49:26.041883 | orchestrator | Wednesday 11 June 2025 14:49:00 +0000 (0:00:00.270) 0:03:47.462 ******** 2025-06-11 14:49:26.041892 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-11 14:49:26.041907 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-11 14:49:26.041917 | orchestrator | 2025-06-11 14:49:26.041926 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-11 14:49:26.041935 | orchestrator | Wednesday 11 June 2025 14:49:02 +0000 (0:00:02.275) 0:03:49.737 ******** 2025-06-11 14:49:26.041945 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.041954 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.041963 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.041973 | orchestrator | 2025-06-11 14:49:26.041982 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-11 14:49:26.041991 | orchestrator | Wednesday 11 June 2025 14:49:03 +0000 (0:00:00.316) 0:03:50.054 ******** 2025-06-11 14:49:26.042000 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.042010 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.042065 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.042084 | orchestrator | 2025-06-11 14:49:26.042101 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-11 14:49:26.042111 | orchestrator | 2025-06-11 
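
The BGP manifests applied above correspond to the two CRDs the play then queries, CiliumBGPPeeringPolicy and CiliumLoadBalancerIPPool. A sketch of what such objects can look like; the ASNs, peer address, pool CIDR, and node selector are illustrative values, not this run's configuration:

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  nodeSelector:
    matchLabels:
      node-role.osism.tech/network-plane: "true"
  virtualRouters:
    - localASN: 65000
      exportPodCIDR: true
      neighbors:
        - peerAddress: 192.168.16.1/32
          peerASN: 65001
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:              # named cidrs in Cilium releases before 1.15
    - cidr: 192.168.112.0/24
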
14:49:26.042121 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-11 14:49:26.042130 | orchestrator | Wednesday 11 June 2025 14:49:03 +0000 (0:00:00.820) 0:03:50.874 ******** 2025-06-11 14:49:26.042139 | orchestrator | ok: [testbed-manager] 2025-06-11 14:49:26.042149 | orchestrator | 2025-06-11 14:49:26.042158 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-11 14:49:26.042168 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:00.248) 0:03:51.123 ******** 2025-06-11 14:49:26.042177 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-11 14:49:26.042186 | orchestrator | 2025-06-11 14:49:26.042196 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-11 14:49:26.042205 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:00.199) 0:03:51.323 ******** 2025-06-11 14:49:26.042215 | orchestrator | changed: [testbed-manager] 2025-06-11 14:49:26.042224 | orchestrator | 2025-06-11 14:49:26.042233 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-11 14:49:26.042243 | orchestrator | 2025-06-11 14:49:26.042266 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-11 14:49:26.042276 | orchestrator | Wednesday 11 June 2025 14:49:09 +0000 (0:00:05.146) 0:03:56.469 ******** 2025-06-11 14:49:26.042285 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:49:26.042295 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:49:26.042304 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:49:26.042313 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:49:26.042323 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:49:26.042332 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:49:26.042341 | orchestrator | 2025-06-11 14:49:26.042350 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-11 14:49:26.042360 | orchestrator | Wednesday 11 June 2025 14:49:10 +0000 (0:00:00.714) 0:03:57.184 ******** 2025-06-11 14:49:26.042369 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-11 14:49:26.042379 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-11 14:49:26.042388 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-11 14:49:26.042397 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-11 14:49:26.042407 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-11 14:49:26.042416 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-11 14:49:26.042426 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-11 14:49:26.042435 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-11 14:49:26.042450 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-11 14:49:26.042464 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-11 14:49:26.042474 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=openstack-control-plane=enabled) 2025-06-11 14:49:26.042483 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-11 14:49:26.042492 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-11 14:49:26.042501 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-11 14:49:26.042511 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-11 14:49:26.042520 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-11 14:49:26.042529 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-11 14:49:26.042539 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-11 14:49:26.042548 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-11 14:49:26.042557 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-11 14:49:26.042567 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-11 14:49:26.042576 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-11 14:49:26.042585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-11 14:49:26.042594 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-11 14:49:26.042603 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-11 14:49:26.042613 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-11 14:49:26.042622 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-11 14:49:26.042631 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-11 14:49:26.042655 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-11 14:49:26.042665 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-11 14:49:26.042674 | orchestrator | 2025-06-11 14:49:26.042689 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-11 14:49:26.042699 | orchestrator | Wednesday 11 June 2025 14:49:22 +0000 (0:00:12.112) 0:04:09.296 ******** 2025-06-11 14:49:26.042708 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.042718 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.042727 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.042737 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.042746 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.042755 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.042765 | orchestrator | 2025-06-11 14:49:26.042774 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-11 14:49:26.042784 | orchestrator | Wednesday 11 June 2025 14:49:22 +0000 (0:00:00.419) 0:04:09.715 ******** 2025-06-11 14:49:26.042793 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:49:26.042803 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:49:26.042812 
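
Label management is delegated to localhost and driven by the per-host map merged in the previous task; the results come back "ok" rather than "changed" because the role keeps the operation idempotent. A sketch, assuming plain kubectl and a hypothetical node_labels list (kubernetes.core.k8s would be an alternative):

- name: Manage labels
  ansible.builtin.command:
    cmd: kubectl label node {{ inventory_hostname }} {{ item }} --overwrite
  loop: "{{ node_labels }}"   # e.g. node-role.osism.tech/control-plane=true
  delegate_to: localhost
  changed_when: false
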
| orchestrator | skipping: [testbed-node-5] 2025-06-11 14:49:26.042821 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:49:26.042831 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:49:26.042840 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:49:26.042849 | orchestrator | 2025-06-11 14:49:26.042859 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:49:26.042875 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:49:26.042886 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-11 14:49:26.042896 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-11 14:49:26.042905 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-11 14:49:26.042915 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-11 14:49:26.042924 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-11 14:49:26.042934 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-11 14:49:26.042943 | orchestrator | 2025-06-11 14:49:26.042952 | orchestrator | 2025-06-11 14:49:26.042962 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:49:26.042971 | orchestrator | Wednesday 11 June 2025 14:49:23 +0000 (0:00:00.540) 0:04:10.256 ******** 2025-06-11 14:49:26.042985 | orchestrator | =============================================================================== 2025-06-11 14:49:26.042995 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.96s 2025-06-11 14:49:26.043004 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.31s 2025-06-11 14:49:26.043014 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.39s 2025-06-11 14:49:26.043023 | orchestrator | Manage labels ---------------------------------------------------------- 12.11s 2025-06-11 14:49:26.043032 | orchestrator | kubectl : Install required packages ------------------------------------ 10.72s 2025-06-11 14:49:26.043042 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.40s 2025-06-11 14:49:26.043051 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.39s 2025-06-11 14:49:26.043060 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.47s 2025-06-11 14:49:26.043069 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.15s 2025-06-11 14:49:26.043079 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.18s 2025-06-11 14:49:26.043088 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.50s 2025-06-11 14:49:26.043098 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.00s 2025-06-11 14:49:26.043107 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.28s 2025-06-11 14:49:26.043116 | orchestrator | 
k3s_server : Copy vip manifest to first master -------------------------- 2.25s 2025-06-11 14:49:26.043126 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.91s 2025-06-11 14:49:26.043135 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.91s 2025-06-11 14:49:26.043144 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.84s 2025-06-11 14:49:26.043154 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.67s 2025-06-11 14:49:26.043163 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.58s 2025-06-11 14:49:26.043172 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s 2025-06-11 14:49:26.043182 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task b2c01bd4-df4b-465a-bd66-d4c838ff0820 is in state STARTED 2025-06-11 14:49:26.043200 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:49:26.043214 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:49:26.043225 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:49:26.045623 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:49:26.046739 | orchestrator | 2025-06-11 14:49:26 | INFO  | Task 2d5fc765-9670-4008-8245-97f88b86e0ca is in state STARTED 2025-06-11 14:49:26.046780 | orchestrator | 2025-06-11 14:49:26 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:49:29.079066 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task b2c01bd4-df4b-465a-bd66-d4c838ff0820 is in state STARTED 2025-06-11 14:49:29.079458 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:49:29.080055 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:49:29.084018 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:49:29.084372 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:49:29.085522 | orchestrator | 2025-06-11 14:49:29 | INFO  | Task 2d5fc765-9670-4008-8245-97f88b86e0ca is in state STARTED 2025-06-11 14:49:29.085541 | orchestrator | 2025-06-11 14:49:29 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:49:32.144860 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task b2c01bd4-df4b-465a-bd66-d4c838ff0820 is in state STARTED 2025-06-11 14:49:32.146582 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:49:32.148189 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:49:32.150377 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:49:32.152423 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:49:32.154125 | orchestrator | 2025-06-11 14:49:32 | INFO  | Task 2d5fc765-9670-4008-8245-97f88b86e0ca is in state SUCCESS 2025-06-11 14:49:32.154165 | orchestrator | 2025-06-11 
14:49:32 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:49:35.182803 | orchestrator | 2025-06-11 14:49:35 | INFO  | Task b2c01bd4-df4b-465a-bd66-d4c838ff0820 is in state SUCCESS [… identical status polls every ~3 s from 14:49:35 to 14:50:20 elided; tasks aa68503d-969f-4347-8f4b-e1d663cde8f7, a6c20dfb-395b-47e7-87ad-2692dd7de8a4, 9e9d757a-c49d-4061-9c82-b3f471ed66eb, and 4a3abfd9-8332-467e-9514-a1529a1bf26d remain in state STARTED throughout …] 2025-06-11 14:50:20.979550 | orchestrator | 2025-06-11 14:50:20 | INFO  | Wait 1
second(s) until the next check 2025-06-11 14:50:24.013213 | orchestrator | 2025-06-11 14:50:24 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:50:24.013321 | orchestrator | 2025-06-11 14:50:24 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:50:24.014693 | orchestrator | 2025-06-11 14:50:24 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:50:24.015161 | orchestrator | 2025-06-11 14:50:24 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:50:24.015190 | orchestrator | 2025-06-11 14:50:24 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:50:27.053402 | orchestrator | 2025-06-11 14:50:27 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:50:27.053505 | orchestrator | 2025-06-11 14:50:27 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:50:27.053961 | orchestrator | 2025-06-11 14:50:27 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:50:27.054788 | orchestrator | 2025-06-11 14:50:27 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state STARTED 2025-06-11 14:50:27.054811 | orchestrator | 2025-06-11 14:50:27 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:50:30.085991 | orchestrator | 2025-06-11 14:50:30 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED 2025-06-11 14:50:30.086209 | orchestrator | 2025-06-11 14:50:30 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED 2025-06-11 14:50:30.086669 | orchestrator | 2025-06-11 14:50:30 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:50:30.087374 | orchestrator | 2025-06-11 14:50:30 | INFO  | Task 4a3abfd9-8332-467e-9514-a1529a1bf26d is in state SUCCESS 2025-06-11 14:50:30.087791 | orchestrator | 2025-06-11 14:50:30.087816 | orchestrator | 2025-06-11 14:50:30.087828 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-11 14:50:30.087840 | orchestrator | 2025-06-11 14:50:30.087851 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-11 14:50:30.087862 | orchestrator | Wednesday 11 June 2025 14:49:27 +0000 (0:00:00.164) 0:00:00.164 ******** 2025-06-11 14:50:30.087873 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-11 14:50:30.087884 | orchestrator | 2025-06-11 14:50:30.087895 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-11 14:50:30.087906 | orchestrator | Wednesday 11 June 2025 14:49:27 +0000 (0:00:00.743) 0:00:00.907 ******** 2025-06-11 14:50:30.087917 | orchestrator | changed: [testbed-manager] 2025-06-11 14:50:30.087928 | orchestrator | 2025-06-11 14:50:30.087938 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-11 14:50:30.087949 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:01.087) 0:00:01.994 ******** 2025-06-11 14:50:30.087960 | orchestrator | changed: [testbed-manager] 2025-06-11 14:50:30.087970 | orchestrator | 2025-06-11 14:50:30.087982 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:50:30.087994 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:50:30.088005 | orchestrator | 2025-06-11 
14:50:30.088016 | orchestrator | 2025-06-11 14:50:30.088027 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:50:30.088083 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:00.383) 0:00:02.377 ******** 2025-06-11 14:50:30.088096 | orchestrator | =============================================================================== 2025-06-11 14:50:30.088107 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s 2025-06-11 14:50:30.088131 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2025-06-11 14:50:30.088172 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2025-06-11 14:50:30.088183 | orchestrator | 2025-06-11 14:50:30.088194 | orchestrator | 2025-06-11 14:50:30.088205 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-11 14:50:30.088215 | orchestrator | 2025-06-11 14:50:30.088285 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-11 14:50:30.088299 | orchestrator | Wednesday 11 June 2025 14:49:27 +0000 (0:00:00.162) 0:00:00.162 ******** 2025-06-11 14:50:30.088310 | orchestrator | ok: [testbed-manager] 2025-06-11 14:50:30.088321 | orchestrator | 2025-06-11 14:50:30.088332 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-11 14:50:30.088368 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:00.497) 0:00:00.660 ******** 2025-06-11 14:50:30.088380 | orchestrator | ok: [testbed-manager] 2025-06-11 14:50:30.088390 | orchestrator | 2025-06-11 14:50:30.088401 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-11 14:50:30.088411 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:00.498) 0:00:01.159 ******** 2025-06-11 14:50:30.088422 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-11 14:50:30.088433 | orchestrator | 2025-06-11 14:50:30.088443 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-11 14:50:30.088454 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:00.624) 0:00:01.783 ******** 2025-06-11 14:50:30.088465 | orchestrator | changed: [testbed-manager] 2025-06-11 14:50:30.088475 | orchestrator | 2025-06-11 14:50:30.088486 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-11 14:50:30.088497 | orchestrator | Wednesday 11 June 2025 14:49:30 +0000 (0:00:01.082) 0:00:02.865 ******** 2025-06-11 14:50:30.088520 | orchestrator | changed: [testbed-manager] 2025-06-11 14:50:30.088531 | orchestrator | 2025-06-11 14:50:30.088542 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-11 14:50:30.088553 | orchestrator | Wednesday 11 June 2025 14:49:31 +0000 (0:00:00.754) 0:00:03.620 ******** 2025-06-11 14:50:30.088563 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-11 14:50:30.088591 | orchestrator | 2025-06-11 14:50:30.088602 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-11 14:50:30.088613 | orchestrator | Wednesday 11 June 2025 14:49:32 +0000 (0:00:01.398) 0:00:05.018 ******** 2025-06-11 14:50:30.088623 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-11 14:50:30.088634 
2025-06-11 14:50:30.088634 | orchestrator |
2025-06-11 14:50:30.088645 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-11 14:50:30.088655 | orchestrator | Wednesday 11 June 2025 14:49:33 +0000 (0:00:00.714) 0:00:05.732 ********
2025-06-11 14:50:30.088666 | orchestrator | ok: [testbed-manager]
2025-06-11 14:50:30.088677 | orchestrator |
2025-06-11 14:50:30.088687 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-11 14:50:30.088698 | orchestrator | Wednesday 11 June 2025 14:49:33 +0000 (0:00:00.338) 0:00:06.071 ********
2025-06-11 14:50:30.088708 | orchestrator | ok: [testbed-manager]
2025-06-11 14:50:30.088719 | orchestrator |
2025-06-11 14:50:30.088730 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:50:30.088741 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:50:30.088752 | orchestrator |
2025-06-11 14:50:30.088762 | orchestrator |
2025-06-11 14:50:30.088773 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:50:30.088784 | orchestrator | Wednesday 11 June 2025 14:49:33 +0000 (0:00:00.266) 0:00:06.337 ********
2025-06-11 14:50:30.088794 | orchestrator | ===============================================================================
2025-06-11 14:50:30.088805 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.40s
2025-06-11 14:50:30.088815 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.08s
2025-06-11 14:50:30.088826 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.75s
2025-06-11 14:50:30.088848 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.71s
2025-06-11 14:50:30.088860 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.62s
2025-06-11 14:50:30.088871 | orchestrator | Create .kube directory -------------------------------------------------- 0.50s
2025-06-11 14:50:30.088881 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s
2025-06-11 14:50:30.088892 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s
2025-06-11 14:50:30.088903 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2025-06-11 14:50:30.088913 | orchestrator |
2025-06-11 14:50:30.088924 | orchestrator | 2025-06-11 14:50:30 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:30.089103 | orchestrator |
2025-06-11 14:50:30.089117 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-06-11 14:50:30.089127 | orchestrator |
2025-06-11 14:50:30.089138 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-11 14:50:30.089149 | orchestrator | Wednesday 11 June 2025 14:48:12 +0000 (0:00:00.277) 0:00:00.277 ********
2025-06-11 14:50:30.089159 | orchestrator | ok: [localhost] => {
2025-06-11 14:50:30.089171 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-11 14:50:30.089182 | orchestrator | }
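This play decides between a fresh deploy and an upgrade by probing the RabbitMQ management endpoint behind the internal VIP; the deliberately ignored failure that follows is exactly the outcome the message announces. A minimal sketch of that probe-and-switch, using the address and search string from the log; the variable names and timeout are illustrative:

    - name: Check RabbitMQ service               # probe the management UI behind the VIP
      ansible.builtin.wait_for:
        host: 192.168.16.9
        port: 15672
        search_regex: "RabbitMQ Management"
        timeout: 3
      register: rabbitmq_check
      ignore_errors: true                        # a timeout just means RabbitMQ is not deployed yet

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_check is succeeded

    - name: Set kolla_action_rabbitmq = kolla_action_ng
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: "{{ kolla_action_ng | default('deploy') }}"
      when: rabbitmq_check is failed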
2025-06-11 14:50:30.089192 | orchestrator |
2025-06-11 14:50:30.089203 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-11 14:50:30.089214 | orchestrator | Wednesday 11 June 2025 14:48:12 +0000 (0:00:00.080) 0:00:00.357 ********
2025-06-11 14:50:30.089237 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-11 14:50:30.089273 | orchestrator | ...ignoring
2025-06-11 14:50:30.089285 | orchestrator |
2025-06-11 14:50:30.089296 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-11 14:50:30.089306 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:03.592) 0:00:03.949 ********
2025-06-11 14:50:30.089317 | orchestrator | skipping: [localhost]
2025-06-11 14:50:30.089327 | orchestrator |
2025-06-11 14:50:30.089338 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-11 14:50:30.089349 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.107) 0:00:04.057 ********
2025-06-11 14:50:30.089360 | orchestrator | ok: [localhost]
2025-06-11 14:50:30.089370 | orchestrator |
2025-06-11 14:50:30.089381 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:50:30.089391 | orchestrator |
2025-06-11 14:50:30.089402 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:50:30.089412 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.322) 0:00:04.379 ********
2025-06-11 14:50:30.089423 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:50:30.089434 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:50:30.089444 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:50:30.089455 | orchestrator |
2025-06-11 14:50:30.089465 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:50:30.089476 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.523) 0:00:04.903 ********
2025-06-11 14:50:30.089486 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-06-11 14:50:30.089498 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-06-11 14:50:30.089508 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-06-11 14:50:30.089519 | orchestrator |
2025-06-11 14:50:30.089530 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-06-11 14:50:30.089540 | orchestrator |
2025-06-11 14:50:30.089551 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-11 14:50:30.089561 | orchestrator | Wednesday 11 June 2025 14:48:18 +0000 (0:00:01.445) 0:00:06.349 ********
2025-06-11 14:50:30.089587 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:50:30.089598 | orchestrator |
2025-06-11 14:50:30.089609 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-11 14:50:30.089620 | orchestrator | Wednesday 11 June 2025 14:48:20 +0000 (0:00:01.443) 0:00:07.962 ********
2025-06-11 14:50:30.089630 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:50:30.089641 | orchestrator |
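The two "Group hosts based on ..." tasks build transient inventory groups so that the role plays that follow can target hosts by Kolla action and by enabled service. A minimal sketch of that group_by pattern, with illustrative variable names rather than the deployment's actual ones:

    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action_rabbitmq }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_rabbitmq_{{ enable_rabbitmq | bool }}"   # yields the enable_rabbitmq_True item seen above

Later plays can then target hosts: enable_rabbitmq_True, which is also why the warnings further down about unmatched patterns such as enable_outward_rabbitmq_True are harmless when a service is disabled.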
2025-06-11 14:50:30.089651 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-06-11 14:50:30.089662 | orchestrator | Wednesday 11 June 2025 14:48:21 +0000 (0:00:01.443) 0:00:09.406 ********
2025-06-11 14:50:30.089672 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.089683 | orchestrator |
2025-06-11 14:50:30.089693 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-06-11 14:50:30.089704 | orchestrator | Wednesday 11 June 2025 14:48:21 +0000 (0:00:00.330) 0:00:09.736 ********
2025-06-11 14:50:30.089715 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.089727 | orchestrator |
2025-06-11 14:50:30.089739 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-06-11 14:50:30.089752 | orchestrator | Wednesday 11 June 2025 14:48:22 +0000 (0:00:00.318) 0:00:10.055 ********
2025-06-11 14:50:30.089764 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.089775 | orchestrator |
2025-06-11 14:50:30.089788 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-06-11 14:50:30.089800 | orchestrator | Wednesday 11 June 2025 14:48:22 +0000 (0:00:00.308) 0:00:10.363 ********
2025-06-11 14:50:30.089818 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.089830 | orchestrator |
2025-06-11 14:50:30.089842 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-11 14:50:30.089855 | orchestrator | Wednesday 11 June 2025 14:48:22 +0000 (0:00:00.440) 0:00:10.804 ********
2025-06-11 14:50:30.089867 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:50:30.089880 | orchestrator |
2025-06-11 14:50:30.089892 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-11 14:50:30.089904 | orchestrator | Wednesday 11 June 2025 14:48:23 +0000 (0:00:00.710) 0:00:11.515 ********
2025-06-11 14:50:30.089916 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:50:30.089928 | orchestrator |
2025-06-11 14:50:30.089941 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-06-11 14:50:30.089953 | orchestrator | Wednesday 11 June 2025 14:48:24 +0000 (0:00:01.044) 0:00:12.559 ********
2025-06-11 14:50:30.089965 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.089978 | orchestrator |
2025-06-11 14:50:30.089990 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-06-11 14:50:30.090002 | orchestrator | Wednesday 11 June 2025 14:48:25 +0000 (0:00:00.362) 0:00:12.922 ********
2025-06-11 14:50:30.090014 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.090068 | orchestrator |
2025-06-11 14:50:30.090096 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-06-11 14:50:30.090108 | orchestrator | Wednesday 11 June 2025 14:48:25 +0000 (0:00:00.532) 0:00:13.455 ********
2025-06-11 14:50:30.090128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR':
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:50:30.090146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:50:30.090159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:50:30.090179 | orchestrator | 2025-06-11 14:50:30.090190 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-11 14:50:30.090201 | orchestrator | Wednesday 11 June 2025 14:48:26 +0000 (0:00:01.048) 0:00:14.503 ******** 2025-06-11 14:50:30.090220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-11 14:50:30.090238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-11 14:50:30.090251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-11 14:50:30.090282 | orchestrator |
2025-06-11 14:50:30.090304 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-06-11 14:50:30.090315 | orchestrator | Wednesday 11 June 2025 14:48:29 +0000 (0:00:03.216) 0:00:17.720 ********
2025-06-11 14:50:30.090326 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-11 14:50:30.090337 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-11 14:50:30.090348 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
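Each of the "Copying over ..." tasks in this stretch renders one Jinja2 template per node into the kolla config directory and notifies the restart handler that fires later. A minimal sketch of the pattern, assuming kolla-style destination paths; the override path in with_first_found is an illustrative convention, not necessarily this deployment's:

    - name: Copying over rabbitmq-env.conf
      ansible.builtin.template:
        src: "{{ item }}"
        dest: /etc/kolla/rabbitmq/rabbitmq-env.conf
        mode: "0660"
      with_first_found:
        - "{{ node_custom_config }}/rabbitmq/rabbitmq-env.conf.j2"   # operator override, if present
        - /ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2     # role default, the item shown in the log
      notify:
        - Restart rabbitmq container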
2025-06-11 14:50:30.090358 | orchestrator |
2025-06-11 14:50:30.090369 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-11 14:50:30.090380 | orchestrator | Wednesday 11 June 2025 14:48:31 +0000 (0:00:01.510) 0:00:19.230 ********
2025-06-11 14:50:30.090390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-11 14:50:30.090401 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-11 14:50:30.090411 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-11 14:50:30.090422 | orchestrator |
2025-06-11 14:50:30.090432 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-11 14:50:30.090443 | orchestrator | Wednesday 11 June 2025 14:48:33 +0000 (0:00:02.117) 0:00:21.348 ********
2025-06-11 14:50:30.090454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-11 14:50:30.090464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-11 14:50:30.090475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-11 14:50:30.090485 | orchestrator |
2025-06-11 14:50:30.090496 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-11 14:50:30.090507 | orchestrator | Wednesday 11 June 2025 14:48:35 +0000 (0:00:01.611) 0:00:22.959 ********
2025-06-11 14:50:30.090523 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-11 14:50:30.090535 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-11 14:50:30.090545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-11 14:50:30.090556 | orchestrator |
2025-06-11 14:50:30.090567 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-11 14:50:30.090627 | orchestrator | Wednesday 11 June 2025 14:48:37 +0000 (0:00:02.580) 0:00:25.540 ********
2025-06-11 14:50:30.090638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-11 14:50:30.090649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-11 14:50:30.090665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-11 14:50:30.090676 | orchestrator |
2025-06-11 14:50:30.090687 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-11 14:50:30.090697 | orchestrator | Wednesday 11 June 2025 14:48:39 +0000 (0:00:01.637) 0:00:27.177 ********
2025-06-11 14:50:30.090708 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-11 14:50:30.090719 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-11 14:50:30.090730 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-11 14:50:30.090740 | orchestrator |
2025-06-11 14:50:30.090751 | orchestrator | TASK [rabbitmq : include_tasks]
************************************************ 2025-06-11 14:50:30.090768 | orchestrator | Wednesday 11 June 2025 14:48:40 +0000 (0:00:01.401) 0:00:28.579 ******** 2025-06-11 14:50:30.090779 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:50:30.090790 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:50:30.090801 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:50:30.090811 | orchestrator | 2025-06-11 14:50:30.090822 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-11 14:50:30.090832 | orchestrator | Wednesday 11 June 2025 14:48:41 +0000 (0:00:00.427) 0:00:29.007 ******** 2025-06-11 14:50:30.090844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:50:30.090856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:50:30.090877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-11 14:50:30.090889 | orchestrator |
2025-06-11 14:50:30.090900 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-06-11 14:50:30.090911 | orchestrator | Wednesday 11 June 2025 14:48:42 +0000 (0:00:01.387) 0:00:30.394 ********
2025-06-11 14:50:30.090929 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:50:30.090939 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:50:30.090950 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:50:30.090960 | orchestrator |
2025-06-11 14:50:30.091033 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-06-11 14:50:30.091054 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:00.941) 0:00:31.335 ********
2025-06-11 14:50:30.091063 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:50:30.091073 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:50:30.091082 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:50:30.091092 | orchestrator |
2025-06-11 14:50:30.091101 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-11 14:50:30.091111 | orchestrator | Wednesday 11 June 2025 14:48:52 +0000 (0:00:09.025) 0:00:40.361 ********
2025-06-11 14:50:30.091120 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:50:30.091130 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:50:30.091139 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:50:30.091148 | orchestrator |
2025-06-11 14:50:30.091158 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-11 14:50:30.091168 | orchestrator |
2025-06-11 14:50:30.091177 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-11 14:50:30.091186 | orchestrator | Wednesday 11 June 2025 14:48:53 +0000 (0:00:00.564) 0:00:40.925 ********
2025-06-11 14:50:30.091196 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:50:30.091205 | orchestrator |
2025-06-11 14:50:30.091215 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-11 14:50:30.091224 | orchestrator | Wednesday 11 June 2025 14:48:53 +0000 (0:00:00.666) 0:00:41.591 ********
2025-06-11 14:50:30.091234 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:50:30.091243 | orchestrator |
2025-06-11 14:50:30.091252 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-11 14:50:30.091262 | orchestrator | Wednesday 11 June 2025 14:48:53 +0000 (0:00:00.266) 0:00:41.858 ********
2025-06-11 14:50:30.091271 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:50:30.091281 | orchestrator |
2025-06-11 14:50:30.091290 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-11 14:50:30.091300 | orchestrator | Wednesday 11 June 2025 14:48:55 +0000 (0:00:01.840) 0:00:43.698 ********
2025-06-11 14:50:30.091309 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:50:30.091318 | orchestrator |
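After the bootstrap container has initialized the cluster state, the restart plays bring the broker up one node at a time, each time blocking until it answers again; those waits dominate the 84.52s "Waiting for rabbitmq to start" entry in the later TASKS RECAP. A minimal sketch of the restart-then-wait step, using community.docker rather than kolla's own container module and assuming the management port from the log:

    - name: Restart rabbitmq container
      community.docker.docker_container:
        name: rabbitmq
        state: started
        restart: true

    - name: Waiting for rabbitmq to start        # block until the management UI answers again
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"
        port: 15672
        search_regex: "RabbitMQ Management"
        timeout: 300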
2025-06-11 14:50:30.091328 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-11 14:50:30.091337 | orchestrator |
2025-06-11 14:50:30.091347 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-11 14:50:30.091356 | orchestrator | Wednesday 11 June 2025 14:49:50 +0000 (0:00:54.288) 0:01:37.987 ********
2025-06-11 14:50:30.091365 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:50:30.091375 | orchestrator |
2025-06-11 14:50:30.091384 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-11 14:50:30.091393 | orchestrator | Wednesday 11 June 2025 14:49:50 +0000 (0:00:00.643) 0:01:38.631 ********
2025-06-11 14:50:30.091403 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:50:30.091412 | orchestrator |
2025-06-11 14:50:30.091422 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-11 14:50:30.091432 | orchestrator | Wednesday 11 June 2025 14:49:51 +0000 (0:00:00.446) 0:01:39.078 ********
2025-06-11 14:50:30.091441 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:50:30.091451 | orchestrator |
2025-06-11 14:50:30.091460 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-11 14:50:30.091470 | orchestrator | Wednesday 11 June 2025 14:49:53 +0000 (0:00:01.924) 0:01:41.002 ********
2025-06-11 14:50:30.091479 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:50:30.091489 | orchestrator |
2025-06-11 14:50:30.091498 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-11 14:50:30.091513 | orchestrator |
2025-06-11 14:50:30.091523 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-11 14:50:30.091532 | orchestrator | Wednesday 11 June 2025 14:50:08 +0000 (0:00:15.397) 0:01:56.399 ********
2025-06-11 14:50:30.091542 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:50:30.091551 | orchestrator |
2025-06-11 14:50:30.091561 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-11 14:50:30.091617 | orchestrator | Wednesday 11 June 2025 14:50:09 +0000 (0:00:00.664) 0:01:57.064 ********
2025-06-11 14:50:30.091630 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:50:30.091639 | orchestrator |
2025-06-11 14:50:30.091649 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-11 14:50:30.091658 | orchestrator | Wednesday 11 June 2025 14:50:09 +0000 (0:00:00.221) 0:01:57.286 ********
2025-06-11 14:50:30.091668 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:50:30.091677 | orchestrator |
2025-06-11 14:50:30.091686 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-11 14:50:30.091703 | orchestrator | Wednesday 11 June 2025 14:50:11 +0000 (0:00:01.696) 0:01:58.983 ********
2025-06-11 14:50:30.091714 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:50:30.091723 | orchestrator |
2025-06-11 14:50:30.091731 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-11 14:50:30.091739 | orchestrator |
2025-06-11 14:50:30.091747 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-11 14:50:30.091754 | orchestrator | Wednesday 11 June 2025 14:50:25 +0000 (0:00:14.833) 0:02:13.816 ********
2025-06-11 14:50:30.091762 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:50:30.091770 | orchestrator |
2025-06-11 14:50:30.091777 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-11 14:50:30.091785 | orchestrator | Wednesday 11 June 2025 14:50:26 +0000 (0:00:00.947) 0:02:14.764 ********
2025-06-11 14:50:30.091793 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-11 14:50:30.091800 | orchestrator | enable_outward_rabbitmq_True
2025-06-11 14:50:30.091812 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-11 14:50:30.091820 | orchestrator | outward_rabbitmq_restart
2025-06-11 14:50:30.091828 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:50:30.091835 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:50:30.091843 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:50:30.091851 | orchestrator |
2025-06-11 14:50:30.091858 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-11 14:50:30.091866 | orchestrator | skipping: no hosts matched
2025-06-11 14:50:30.091874 | orchestrator |
2025-06-11 14:50:30.091881 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-11 14:50:30.091889 | orchestrator | skipping: no hosts matched
2025-06-11 14:50:30.091897 | orchestrator |
2025-06-11 14:50:30.091904 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-11 14:50:30.091912 | orchestrator | skipping: no hosts matched
2025-06-11 14:50:30.091920 | orchestrator |
2025-06-11 14:50:30.091927 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:50:30.091935 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-11 14:50:30.091943 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-11 14:50:30.091951 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-11 14:50:30.091959 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-11 14:50:30.091967 | orchestrator |
2025-06-11 14:50:30.091974 | orchestrator |
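The "Enable all stable feature flags" step above is, in substance, a single rabbitmqctl call per node. A minimal sketch, assuming the container name from the log; changed_when: false mirrors the fact that the task reports ok rather than changed:

    - name: Enable all stable feature flags
      ansible.builtin.command: >
        docker exec rabbitmq rabbitmqctl enable_feature_flag all
      changed_when: false     # flags are cluster-wide state, safe to repeat on every node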
2025-06-11 14:50:30.091987 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:50:30.091995 | orchestrator | Wednesday 11 June 2025 14:50:29 +0000 (0:00:02.488) 0:02:17.252 ********
2025-06-11 14:50:30.092003 | orchestrator | ===============================================================================
2025-06-11 14:50:30.092011 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.52s
2025-06-11 14:50:30.092018 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.03s
2025-06-11 14:50:30.092026 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.46s
2025-06-11 14:50:30.092034 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.59s
2025-06-11 14:50:30.092042 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.22s
2025-06-11 14:50:30.092049 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.58s
2025-06-11 14:50:30.092057 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s
2025-06-11 14:50:30.092064 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.12s
2025-06-11 14:50:30.092072 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s
2025-06-11 14:50:30.092080 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s
2025-06-11 14:50:30.092087 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.61s
2025-06-11 14:50:30.092095 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.61s
2025-06-11 14:50:30.092103 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s
2025-06-11 14:50:30.092110 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.45s
2025-06-11 14:50:30.092118 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.44s
2025-06-11 14:50:30.092126 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s
2025-06-11 14:50:30.092133 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.39s
2025-06-11 14:50:30.092141 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.05s
2025-06-11 14:50:30.092149 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s
2025-06-11 14:50:30.092156 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.95s
2025-06-11 14:50:33.126523 | orchestrator | 2025-06-11 14:50:33 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:33.127603 | orchestrator | 2025-06-11 14:50:33 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:33.129127 | orchestrator | 2025-06-11 14:50:33 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:33.129214 | orchestrator | 2025-06-11 14:50:33 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:36.172957 | orchestrator | 2025-06-11 14:50:36 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:36.173179 | orchestrator | 2025-06-11 14:50:36 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:36.173826 | orchestrator | 2025-06-11 14:50:36 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:36.173850 | orchestrator | 2025-06-11 14:50:36 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:39.213644 | orchestrator | 2025-06-11 14:50:39 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:39.214276 | orchestrator | 2025-06-11 14:50:39 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:39.215481 | orchestrator | 2025-06-11 14:50:39 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:39.215597 | orchestrator | 2025-06-11 14:50:39 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:42.275881 | orchestrator | 2025-06-11 14:50:42 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:42.277096 | orchestrator | 2025-06-11 14:50:42 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:42.278186 | orchestrator | 2025-06-11 14:50:42 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:42.278223 | orchestrator | 2025-06-11 14:50:42 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:45.330313 | orchestrator | 2025-06-11 14:50:45 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:45.330412 | orchestrator | 2025-06-11 14:50:45 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:45.332985 | orchestrator | 2025-06-11 14:50:45 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:45.333591 | orchestrator | 2025-06-11 14:50:45 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:48.383206 | orchestrator | 2025-06-11 14:50:48 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:48.384930 | orchestrator | 2025-06-11 14:50:48 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:48.386708 | orchestrator | 2025-06-11 14:50:48 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:48.387061 | orchestrator | 2025-06-11 14:50:48 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:51.425033 | orchestrator | 2025-06-11 14:50:51 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:51.425719 | orchestrator | 2025-06-11 14:50:51 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:51.426710 | orchestrator | 2025-06-11 14:50:51 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:51.426734 | orchestrator | 2025-06-11 14:50:51 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:54.473201 | orchestrator | 2025-06-11 14:50:54 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:54.474628 | orchestrator | 2025-06-11 14:50:54 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:54.476162 | orchestrator | 2025-06-11 14:50:54 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:54.476279 | orchestrator | 2025-06-11 14:50:54 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:50:57.528633 | orchestrator | 2025-06-11 14:50:57 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:50:57.529602 | orchestrator | 2025-06-11 14:50:57 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:50:57.529650 | orchestrator | 2025-06-11 14:50:57 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:50:57.529664 | orchestrator | 2025-06-11 14:50:57 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:00.560679 | orchestrator | 2025-06-11 14:51:00 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:00.562490 | orchestrator | 2025-06-11 14:51:00 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:00.563896 | orchestrator | 2025-06-11 14:51:00 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:00.563925 | orchestrator | 2025-06-11 14:51:00 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:03.602977 | orchestrator | 2025-06-11 14:51:03 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:03.604747 | orchestrator | 2025-06-11 14:51:03 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:03.606648 | orchestrator | 2025-06-11 14:51:03 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:03.606759 | orchestrator | 2025-06-11 14:51:03 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:06.650522 | orchestrator | 2025-06-11 14:51:06 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:06.652684 | orchestrator | 2025-06-11 14:51:06 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:06.654924 | orchestrator | 2025-06-11 14:51:06 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:06.655078 | orchestrator | 2025-06-11 14:51:06 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:09.714320 | orchestrator | 2025-06-11 14:51:09 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:09.715309 | orchestrator | 2025-06-11 14:51:09 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:09.717753 | orchestrator | 2025-06-11 14:51:09 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:09.717788 | orchestrator | 2025-06-11 14:51:09 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:12.767588 | orchestrator | 2025-06-11 14:51:12 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:12.769445 | orchestrator | 2025-06-11 14:51:12 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:12.769787 | orchestrator | 2025-06-11 14:51:12 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:12.771000 | orchestrator | 2025-06-11 14:51:12 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:15.803268 | orchestrator | 2025-06-11 14:51:15 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:15.803440 | orchestrator | 2025-06-11 14:51:15 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:15.805163 | orchestrator | 2025-06-11 14:51:15 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:15.805190 | orchestrator | 2025-06-11 14:51:15 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:18.865185 | orchestrator | 2025-06-11 14:51:18 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:18.866458 | orchestrator | 2025-06-11 14:51:18 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:18.868598 | orchestrator | 2025-06-11 14:51:18 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:18.868697 | orchestrator | 2025-06-11 14:51:18 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:21.912401 | orchestrator | 2025-06-11 14:51:21 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:21.915377 | orchestrator | 2025-06-11 14:51:21 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:21.917600 | orchestrator | 2025-06-11 14:51:21 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:21.919923 | orchestrator | 2025-06-11 14:51:21 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:24.956694 | orchestrator | 2025-06-11 14:51:24 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:24.956830 | orchestrator | 2025-06-11 14:51:24 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state STARTED
2025-06-11 14:51:24.956846 | orchestrator | 2025-06-11 14:51:24 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:24.956858 | orchestrator | 2025-06-11 14:51:24 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:27.999103 | orchestrator | 2025-06-11 14:51:27 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:28.002258 | orchestrator | 2025-06-11 14:51:28 | INFO  | Task a6c20dfb-395b-47e7-87ad-2692dd7de8a4 is in state SUCCESS
2025-06-11 14:51:28.005241 | orchestrator |
2025-06-11 14:51:28.005283 | orchestrator |
2025-06-11 14:51:28.005295 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:51:28.005306 | orchestrator |
2025-06-11 14:51:28.005316 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:51:28.005327 | orchestrator | Wednesday 11 June 2025 14:49:03 +0000 (0:00:00.160) 0:00:00.160 ********
2025-06-11 14:51:28.005337 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:51:28.005349 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:51:28.005358 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:51:28.005368 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:51:28.005377 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:51:28.005387 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:51:28.005396 | orchestrator |
2025-06-11 14:51:28.005407 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:51:28.005417 | orchestrator | Wednesday 11 June 2025 14:49:03 +0000 (0:00:00.558) 0:00:00.718 ********
2025-06-11 14:51:28.005428 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-11 14:51:28.005438 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-11 14:51:28.005456 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-11 14:51:28.005466 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-11 14:51:28.005476 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-11 14:51:28.005486 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-11 14:51:28.005495 | orchestrator |
2025-06-11 14:51:28.005529 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-11 14:51:28.005541 | orchestrator |
2025-06-11 14:51:28.005551 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-11 14:51:28.005561 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:00.984) 0:00:01.703 ********
2025-06-11 14:51:28.005572 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:51:28.005583 | orchestrator |
2025-06-11 14:51:28.005592 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-11 14:51:28.005602 | orchestrator | Wednesday 11 June 2025 14:49:06 +0000 (0:00:01.218) 0:00:02.921 ********
2025-06-11 14:51:28.005614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name':
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005697 | orchestrator | 2025-06-11 14:51:28.005720 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-11 14:51:28.005730 | orchestrator | Wednesday 11 June 2025 14:49:08 +0000 (0:00:01.928) 0:00:04.850 ******** 2025-06-11 14:51:28.005740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005811 | orchestrator | 2025-06-11 14:51:28.005822 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-11 14:51:28.005834 | orchestrator | Wednesday 11 June 2025 14:49:09 +0000 (0:00:01.844) 0:00:06.695 ******** 2025-06-11 14:51:28.005846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005926 | orchestrator | 2025-06-11 14:51:28.005937 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-11 14:51:28.005948 | orchestrator | Wednesday 11 June 2025 14:49:11 +0000 (0:00:01.973) 0:00:08.668 ******** 2025-06-11 14:51:28.005960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.005988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006097 | orchestrator | 2025-06-11 14:51:28.006115 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-11 14:51:28.006127 | orchestrator | Wednesday 11 June 2025 14:49:14 +0000 (0:00:02.419) 0:00:11.088 ******** 2025-06-11 14:51:28.006139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.006214 | orchestrator | 2025-06-11 14:51:28.006223 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-11 14:51:28.006233 | orchestrator | Wednesday 11 June 2025 14:49:16 +0000 (0:00:01.698) 0:00:12.786 ******** 2025-06-11 14:51:28.006243 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.006252 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:51:28.006262 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:51:28.006271 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:51:28.006281 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:51:28.006290 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:51:28.006300 | orchestrator | 2025-06-11 14:51:28.006309 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-11 14:51:28.006319 | orchestrator | Wednesday 11 June 2025 14:49:18 +0000 (0:00:02.502) 0:00:15.291 ******** 2025-06-11 14:51:28.006329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-11 14:51:28.006339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-11 14:51:28.006348 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-11 14:51:28.006357 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-11 14:51:28.006367 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-11 14:51:28.006376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-11 14:51:28.006395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-11 14:51:28.006448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-11 14:51:28.006460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-11 14:51:28.006479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-11 14:51:28.006490 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-11 14:51:28.006499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-11 14:51:28.006534 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-11 14:51:28.006544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006554 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006573 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006583 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006592 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006602 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-11 14:51:28.006611 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006630 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006659 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-11 14:51:28.006668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006697 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006710 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-11 14:51:28.006726 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-11 14:51:28.006742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-11 14:51:28.006764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-11 14:51:28.006787 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-11 14:51:28.006802 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-11 14:51:28.006816 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-11 14:51:28.006841 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-11 14:51:28.006856 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-11 14:51:28.006882 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-11 14:51:28.006893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-11 14:51:28.006903 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-11 14:51:28.006913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-11 14:51:28.006922 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-11 14:51:28.006937 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-11 14:51:28.006947 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-11 14:51:28.006956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-11 14:51:28.006966 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-11 14:51:28.006976 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-11 14:51:28.006985 | orchestrator |
2025-06-11 14:51:28.006995 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-11 14:51:28.007005 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:18.551) 0:00:33.842 ********
2025-06-11 14:51:28.007014 | orchestrator |
2025-06-11 14:51:28.007024 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-11 14:51:28.007033 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:00.062) 0:00:33.905 ********
2025-06-11 14:51:28.007042 | orchestrator |
2025-06-11 14:51:28.007052 | orchestrator | TASK [ovn-controller : Flush handlers]
***************************************** 2025-06-11 14:51:28.007090 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:00.066) 0:00:34.034 ******** 2025-06-11 14:51:28.007100 | orchestrator | 2025-06-11 14:51:28.007109 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-11 14:51:28.007119 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:00.063) 0:00:34.097 ******** 2025-06-11 14:51:28.007128 | orchestrator | 2025-06-11 14:51:28.007137 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-11 14:51:28.007147 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:00.062) 0:00:34.160 ******** 2025-06-11 14:51:28.007156 | orchestrator | 2025-06-11 14:51:28.007166 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-11 14:51:28.007175 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:00.064) 0:00:34.225 ******** 2025-06-11 14:51:28.007185 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.007195 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007204 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:51:28.007214 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:51:28.007223 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:51:28.007232 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007242 | orchestrator | 2025-06-11 14:51:28.007259 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-11 14:51:28.007269 | orchestrator | Wednesday 11 June 2025 14:49:39 +0000 (0:00:01.822) 0:00:36.047 ******** 2025-06-11 14:51:28.007278 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.007288 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:51:28.007297 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:51:28.007306 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:51:28.007316 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:51:28.007325 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:51:28.007335 | orchestrator | 2025-06-11 14:51:28.007344 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-11 14:51:28.007354 | orchestrator | 2025-06-11 14:51:28.007363 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-11 14:51:28.007373 | orchestrator | Wednesday 11 June 2025 14:50:12 +0000 (0:00:33.692) 0:01:09.739 ******** 2025-06-11 14:51:28.007382 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:51:28.007392 | orchestrator | 2025-06-11 14:51:28.007402 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-11 14:51:28.007411 | orchestrator | Wednesday 11 June 2025 14:50:13 +0000 (0:00:00.533) 0:01:10.273 ******** 2025-06-11 14:51:28.007421 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:51:28.007431 | orchestrator | 2025-06-11 14:51:28.007440 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-11 14:51:28.007450 | orchestrator | Wednesday 11 June 2025 14:50:14 +0000 (0:00:00.672) 0:01:10.946 ******** 2025-06-11 14:51:28.007459 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007469 | orchestrator | ok: 
[testbed-node-1] 2025-06-11 14:51:28.007478 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007487 | orchestrator | 2025-06-11 14:51:28.007497 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-11 14:51:28.007575 | orchestrator | Wednesday 11 June 2025 14:50:14 +0000 (0:00:00.758) 0:01:11.704 ******** 2025-06-11 14:51:28.007589 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007598 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.007608 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007624 | orchestrator | 2025-06-11 14:51:28.007641 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-11 14:51:28.007661 | orchestrator | Wednesday 11 June 2025 14:50:15 +0000 (0:00:00.373) 0:01:12.078 ******** 2025-06-11 14:51:28.007684 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007698 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.007712 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007726 | orchestrator | 2025-06-11 14:51:28.007741 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-11 14:51:28.007758 | orchestrator | Wednesday 11 June 2025 14:50:15 +0000 (0:00:00.353) 0:01:12.431 ******** 2025-06-11 14:51:28.007769 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007779 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.007788 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007797 | orchestrator | 2025-06-11 14:51:28.007807 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-11 14:51:28.007817 | orchestrator | Wednesday 11 June 2025 14:50:16 +0000 (0:00:00.654) 0:01:13.086 ******** 2025-06-11 14:51:28.007826 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.007835 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.007857 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.007867 | orchestrator | 2025-06-11 14:51:28.007876 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-11 14:51:28.007886 | orchestrator | Wednesday 11 June 2025 14:50:16 +0000 (0:00:00.405) 0:01:13.491 ******** 2025-06-11 14:51:28.007896 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.007905 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.007923 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.007932 | orchestrator | 2025-06-11 14:51:28.007942 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-11 14:51:28.007951 | orchestrator | Wednesday 11 June 2025 14:50:16 +0000 (0:00:00.284) 0:01:13.776 ******** 2025-06-11 14:51:28.007961 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.007970 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.007980 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.007989 | orchestrator | 2025-06-11 14:51:28.007999 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-11 14:51:28.008009 | orchestrator | Wednesday 11 June 2025 14:50:17 +0000 (0:00:00.287) 0:01:14.064 ******** 2025-06-11 14:51:28.008018 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008027 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008037 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008046 | orchestrator | 
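The "Configure OVN in OVSDB" task above is where each chassis gets wired into OVN: the encapsulation address and type (Geneve), the southbound ovn-remote endpoints, the probe intervals, and the bridge/CMS mappings are written into the external_ids column of the local Open_vSwitch table, where ovn-controller picks them up. Note the split between node roles in the results: the network nodes testbed-node-0/1/2 keep ovn-bridge-mappings and the enable-chassis-as-gw CMS option (state present) and drop ovn-chassis-mac-mappings, while the compute nodes testbed-node-3/4/5 do the inverse. A minimal sketch of the equivalent calls for testbed-node-0, with the values taken from the log; the Python wrapper itself is illustrative, not the kolla-ansible implementation:

```python
import subprocess

# Settings applied above for testbed-node-0 (values from the log). A compute
# node such as testbed-node-3 would instead set ovn-chassis-mac-mappings and
# drop the bridge mapping and CMS options.
external_ids = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
    "ovn-monitor-all": "false",
    "ovn-bridge-mappings": "physnet1:br-ex",
    "ovn-cms-options": "enable-chassis-as-gw,availability-zones=nova",
}

for key, value in external_ids.items():
    # Quote the value so endpoints containing ':' and ',' survive
    # ovs-vsctl's argument parsing.
    subprocess.run(
        ["ovs-vsctl", "set", "open_vswitch", ".", f'external_ids:{key}="{value}"'],
        check=True,
    )
```

The result can be inspected with `ovs-vsctl get open_vswitch . external_ids`, which is also a quick way to confirm which role (gateway or compute) a given chassis ended up with.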
2025-06-11 14:51:28.008056 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-11 14:51:28.008065 | orchestrator | Wednesday 11 June 2025 14:50:17 +0000 (0:00:00.434) 0:01:14.498 ******** 2025-06-11 14:51:28.008075 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008084 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008094 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008103 | orchestrator | 2025-06-11 14:51:28.008112 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-11 14:51:28.008122 | orchestrator | Wednesday 11 June 2025 14:50:18 +0000 (0:00:00.313) 0:01:14.812 ******** 2025-06-11 14:51:28.008132 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008141 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008150 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008160 | orchestrator | 2025-06-11 14:51:28.008169 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-11 14:51:28.008179 | orchestrator | Wednesday 11 June 2025 14:50:18 +0000 (0:00:00.277) 0:01:15.089 ******** 2025-06-11 14:51:28.008188 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008198 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008207 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008216 | orchestrator | 2025-06-11 14:51:28.008226 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-11 14:51:28.008235 | orchestrator | Wednesday 11 June 2025 14:50:18 +0000 (0:00:00.290) 0:01:15.379 ******** 2025-06-11 14:51:28.008245 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008254 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008264 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008273 | orchestrator | 2025-06-11 14:51:28.008283 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-11 14:51:28.008292 | orchestrator | Wednesday 11 June 2025 14:50:19 +0000 (0:00:00.459) 0:01:15.839 ******** 2025-06-11 14:51:28.008302 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008311 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008321 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008330 | orchestrator | 2025-06-11 14:51:28.008339 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-11 14:51:28.008349 | orchestrator | Wednesday 11 June 2025 14:50:19 +0000 (0:00:00.322) 0:01:16.162 ******** 2025-06-11 14:51:28.008359 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008368 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008378 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008387 | orchestrator | 2025-06-11 14:51:28.008397 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-11 14:51:28.008406 | orchestrator | Wednesday 11 June 2025 14:50:19 +0000 (0:00:00.285) 0:01:16.447 ******** 2025-06-11 14:51:28.008416 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008425 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008435 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008449 | orchestrator | 2025-06-11 14:51:28.008459 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN SB leader/follower role] ************** 2025-06-11 14:51:28.008468 | orchestrator | Wednesday 11 June 2025 14:50:19 +0000 (0:00:00.319) 0:01:16.766 ******** 2025-06-11 14:51:28.008478 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008487 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008497 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008528 | orchestrator | 2025-06-11 14:51:28.008539 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-11 14:51:28.008549 | orchestrator | Wednesday 11 June 2025 14:50:20 +0000 (0:00:00.488) 0:01:17.255 ******** 2025-06-11 14:51:28.008559 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008568 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008585 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008595 | orchestrator | 2025-06-11 14:51:28.008605 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-11 14:51:28.008614 | orchestrator | Wednesday 11 June 2025 14:50:20 +0000 (0:00:00.277) 0:01:17.533 ******** 2025-06-11 14:51:28.008624 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:51:28.008633 | orchestrator | 2025-06-11 14:51:28.008643 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-11 14:51:28.008652 | orchestrator | Wednesday 11 June 2025 14:50:21 +0000 (0:00:00.542) 0:01:18.075 ******** 2025-06-11 14:51:28.008662 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.008671 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.008680 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.008690 | orchestrator | 2025-06-11 14:51:28.008699 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-11 14:51:28.008709 | orchestrator | Wednesday 11 June 2025 14:50:22 +0000 (0:00:00.806) 0:01:18.881 ******** 2025-06-11 14:51:28.008718 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.008732 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.008742 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.008751 | orchestrator | 2025-06-11 14:51:28.008761 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-11 14:51:28.008770 | orchestrator | Wednesday 11 June 2025 14:50:22 +0000 (0:00:00.472) 0:01:19.354 ******** 2025-06-11 14:51:28.008780 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008789 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008798 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008808 | orchestrator | 2025-06-11 14:51:28.008817 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-11 14:51:28.008826 | orchestrator | Wednesday 11 June 2025 14:50:22 +0000 (0:00:00.319) 0:01:19.674 ******** 2025-06-11 14:51:28.008836 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008845 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008855 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008864 | orchestrator | 2025-06-11 14:51:28.008873 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-11 14:51:28.008883 | orchestrator | Wednesday 11 June 2025 14:50:23 +0000 
(0:00:00.339) 0:01:20.014 ******** 2025-06-11 14:51:28.008893 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008902 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008911 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008921 | orchestrator | 2025-06-11 14:51:28.008930 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-11 14:51:28.008940 | orchestrator | Wednesday 11 June 2025 14:50:23 +0000 (0:00:00.518) 0:01:20.533 ******** 2025-06-11 14:51:28.008949 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.008959 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.008968 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.008977 | orchestrator | 2025-06-11 14:51:28.008987 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-11 14:51:28.009003 | orchestrator | Wednesday 11 June 2025 14:50:24 +0000 (0:00:00.417) 0:01:20.950 ******** 2025-06-11 14:51:28.009012 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.009022 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.009031 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.009041 | orchestrator | 2025-06-11 14:51:28.009050 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-11 14:51:28.009059 | orchestrator | Wednesday 11 June 2025 14:50:24 +0000 (0:00:00.592) 0:01:21.543 ******** 2025-06-11 14:51:28.009069 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.009078 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.009088 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.009097 | orchestrator | 2025-06-11 14:51:28.009107 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-11 14:51:28.009116 | orchestrator | Wednesday 11 June 2025 14:50:25 +0000 (0:00:00.396) 0:01:21.940 ******** 2025-06-11 14:51:28.009127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009298 | orchestrator | 2025-06-11 14:51:28.009313 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-11 14:51:28.009330 | orchestrator | Wednesday 11 June 2025 14:50:27 +0000 (0:00:02.207) 0:01:24.147 ******** 2025-06-11 14:51:28.009346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009541 | orchestrator | 2025-06-11 14:51:28.009557 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-11 14:51:28.009573 | orchestrator | Wednesday 11 June 2025 14:50:31 +0000 (0:00:03.810) 0:01:27.958 ******** 2025-06-11 14:51:28.009590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.009762 | orchestrator | 2025-06-11 14:51:28.009778 
| orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-11 14:51:28.009795 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:01.920) 0:01:29.878 ******** 2025-06-11 14:51:28.009811 | orchestrator | 2025-06-11 14:51:28.009827 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-11 14:51:28.009842 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.059) 0:01:29.937 ******** 2025-06-11 14:51:28.009858 | orchestrator | 2025-06-11 14:51:28.009875 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-11 14:51:28.009892 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.061) 0:01:29.999 ******** 2025-06-11 14:51:28.009908 | orchestrator | 2025-06-11 14:51:28.009924 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-11 14:51:28.009940 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.060) 0:01:30.059 ******** 2025-06-11 14:51:28.009957 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.009974 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:51:28.009991 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:51:28.010007 | orchestrator | 2025-06-11 14:51:28.010077 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-11 14:51:28.010096 | orchestrator | Wednesday 11 June 2025 14:50:36 +0000 (0:00:02.852) 0:01:32.911 ******** 2025-06-11 14:51:28.010113 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.010129 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:51:28.010145 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:51:28.010162 | orchestrator | 2025-06-11 14:51:28.010178 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-11 14:51:28.010193 | orchestrator | Wednesday 11 June 2025 14:50:39 +0000 (0:00:03.117) 0:01:36.028 ******** 2025-06-11 14:51:28.010209 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:51:28.010224 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.010241 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:51:28.010257 | orchestrator | 2025-06-11 14:51:28.010273 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-11 14:51:28.010289 | orchestrator | Wednesday 11 June 2025 14:50:47 +0000 (0:00:07.773) 0:01:43.801 ******** 2025-06-11 14:51:28.010305 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:51:28.010322 | orchestrator | 2025-06-11 14:51:28.010340 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-11 14:51:28.010356 | orchestrator | Wednesday 11 June 2025 14:50:47 +0000 (0:00:00.125) 0:01:43.927 ******** 2025-06-11 14:51:28.010373 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.010390 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.010401 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.010410 | orchestrator | 2025-06-11 14:51:28.010420 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-11 14:51:28.010430 | orchestrator | Wednesday 11 June 2025 14:50:48 +0000 (0:00:00.932) 0:01:44.859 ******** 2025-06-11 14:51:28.010439 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.010449 | orchestrator | skipping: [testbed-node-2] 2025-06-11 
14:51:28.010461 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.010477 | orchestrator | 2025-06-11 14:51:28.010494 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-11 14:51:28.010562 | orchestrator | Wednesday 11 June 2025 14:50:49 +0000 (0:00:00.965) 0:01:45.824 ******** 2025-06-11 14:51:28.010582 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.010600 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.010618 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.010635 | orchestrator | 2025-06-11 14:51:28.010652 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-11 14:51:28.010712 | orchestrator | Wednesday 11 June 2025 14:50:49 +0000 (0:00:00.787) 0:01:46.612 ******** 2025-06-11 14:51:28.010734 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:51:28.010751 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:51:28.010770 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:51:28.010788 | orchestrator | 2025-06-11 14:51:28.010806 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-11 14:51:28.010825 | orchestrator | Wednesday 11 June 2025 14:50:50 +0000 (0:00:00.638) 0:01:47.251 ******** 2025-06-11 14:51:28.010842 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.010861 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.010893 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.010912 | orchestrator | 2025-06-11 14:51:28.010930 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-11 14:51:28.010948 | orchestrator | Wednesday 11 June 2025 14:50:51 +0000 (0:00:00.977) 0:01:48.229 ******** 2025-06-11 14:51:28.010966 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.010984 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.011002 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.011021 | orchestrator | 2025-06-11 14:51:28.011039 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-11 14:51:28.011056 | orchestrator | Wednesday 11 June 2025 14:50:52 +0000 (0:00:01.333) 0:01:49.563 ******** 2025-06-11 14:51:28.011072 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:51:28.011090 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:51:28.011107 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:51:28.011124 | orchestrator | 2025-06-11 14:51:28.011141 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-11 14:51:28.011158 | orchestrator | Wednesday 11 June 2025 14:50:53 +0000 (0:00:00.315) 0:01:49.878 ******** 2025-06-11 14:51:28.011185 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011203 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
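Everything from "Set bootstrap args fact" through the three restart handlers above brings the NB and SB databases up as three-node Raft clusters on testbed-node-0/1/2. Once they are running, the role resolves the current cluster leader and applies the connection settings only there, which is why "Configure OVN NB connection settings" and "Configure OVN SB connection settings" report changed on testbed-node-0 and skipping on the two followers. A hedged sketch of that leader check: ovs-appctl cluster/status and ovn-nbctl set-connection are standard OVN tooling, but the control-socket path is an assumption (it varies between packagings, and inside the kolla containers the call would go through docker exec ovn_nb_db):

```python
import subprocess

def is_nb_leader(ctl: str = "/var/run/ovn/ovnnb_db.ctl") -> bool:
    # cluster/status reports the Raft state of the local server, including
    # a "Role: leader|follower|candidate" line.
    out = subprocess.run(
        ["ovs-appctl", "-t", ctl, "cluster/status", "OVN_Northbound"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(line.strip() == "Role: leader" for line in out.splitlines())

if is_nb_leader():
    # Only the leader opens the client listener; 6641 is the conventional
    # NB port (the SB databases listen on 6642, matching the ovn-remote
    # endpoints configured earlier).
    subprocess.run(["ovn-nbctl", "set-connection", "ptcp:6641:0.0.0.0"], check=True)
```

Running the same status command against ovnsb_db.ctl with OVN_Southbound identifies the SB leader.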
2025-06-11 14:51:28.011222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011261 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011292 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011331 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011352 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011363 | orchestrator | 2025-06-11 14:51:28.011377 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-11 14:51:28.011393 | orchestrator | Wednesday 11 June 2025 14:50:54 +0000 (0:00:01.567) 0:01:51.446 ******** 2025-06-11 14:51:28.011410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011435 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011454 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-11 14:51:28.011623 | orchestrator | 2025-06-11 14:51:28.011639 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-11 14:51:28.011654 | orchestrator | Wednesday 11 June 2025 14:50:58 +0000 (0:00:04.062) 0:01:55.508 ******** 2025-06-11 14:51:28.011680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011697 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011722 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011757 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 14:51:28.011821 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
2025-06-11 14:51:28.011854 | orchestrator |
2025-06-11 14:51:28.011872 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-11 14:51:28.011888 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:03.396) 0:01:58.905 ********
2025-06-11 14:51:28.011904 | orchestrator |
2025-06-11 14:51:28.011920 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-11 14:51:28.011937 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:00.064) 0:01:58.970 ********
2025-06-11 14:51:28.011951 | orchestrator |
2025-06-11 14:51:28.011960 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-11 14:51:28.011970 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:00.062) 0:01:59.033 ********
2025-06-11 14:51:28.011979 | orchestrator |
2025-06-11 14:51:28.011989 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-11 14:51:28.011998 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:00.072) 0:01:59.105 ********
2025-06-11 14:51:28.012008 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:51:28.012018 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:51:28.012027 | orchestrator |
2025-06-11 14:51:28.012044 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-11 14:51:28.012054 | orchestrator | Wednesday 11 June 2025 14:51:08 +0000 (0:00:06.206) 0:02:05.312 ********
2025-06-11 14:51:28.012064 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:51:28.012073 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:51:28.012082 | orchestrator |
2025-06-11 14:51:28.012092 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-11 14:51:28.012105 | orchestrator | Wednesday 11 June 2025 14:51:14 +0000 (0:00:06.216) 0:02:11.529 ********
2025-06-11 14:51:28.012118 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:51:28.012130 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:51:28.012144 | orchestrator |
2025-06-11 14:51:28.012157 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-11 14:51:28.012169 | orchestrator | Wednesday 11 June 2025 14:51:20 +0000 (0:00:06.234) 0:02:17.763 ********
2025-06-11 14:51:28.012183 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:51:28.012196 | orchestrator |
2025-06-11 14:51:28.012209 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-11 14:51:28.012239 | orchestrator | Wednesday 11 June 2025 14:51:21 +0000 (0:00:00.132) 0:02:17.896 ********
2025-06-11 14:51:28.012252 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:51:28.012266 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:51:28.012280 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:51:28.012293 | orchestrator |
2025-06-11 14:51:28.012306 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-11 14:51:28.012314 | orchestrator | Wednesday 11 June 2025 14:51:22 +0000 (0:00:00.964) 0:02:18.860 ********
2025-06-11 14:51:28.012322 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:51:28.012330 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:51:28.012341 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:51:28.012354 | orchestrator |
2025-06-11 14:51:28.012367 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-11 14:51:28.012381 | orchestrator | Wednesday 11 June 2025 14:51:22 +0000 (0:00:00.538) 0:02:19.399 ********
2025-06-11 14:51:28.012394 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:51:28.012407 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:51:28.012419 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:51:28.012433 | orchestrator |
2025-06-11 14:51:28.012441 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-11 14:51:28.012449 | orchestrator | Wednesday 11 June 2025 14:51:23 +0000 (0:00:00.743) 0:02:20.143 ********
2025-06-11 14:51:28.012457 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:51:28.012465 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:51:28.012473 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:51:28.012480 | orchestrator |
2025-06-11 14:51:28.012488 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-11 14:51:28.012496 | orchestrator | Wednesday 11 June 2025 14:51:23 +0000 (0:00:00.584) 0:02:20.728 ********
2025-06-11 14:51:28.012504 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:51:28.012540 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:51:28.012558 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:51:28.012577 | orchestrator |
2025-06-11 14:51:28.012590 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-11 14:51:28.012602 | orchestrator | Wednesday 11 June 2025 14:51:24 +0000 (0:00:00.939) 0:02:21.668 ********
2025-06-11 14:51:28.012614 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:51:28.012626 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:51:28.012638 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:51:28.012651 | orchestrator |
2025-06-11 14:51:28.012664 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:51:28.012677 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-11 14:51:28.012691 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-11 14:51:28.012704 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-11 14:51:28.012719 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:51:28.012734 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:51:28.012743 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 14:51:28.012751 | orchestrator |
2025-06-11 14:51:28.012759 | orchestrator |
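The "Get OVN_Northbound/OVN_Southbound cluster leader" tasks query each database member for its Raft role, so that the connection settings are only configured once, on the leader; that is why testbed-node-1 and testbed-node-2 are skipped in both "Configure OVN ... connection settings" tasks. A hedged sketch of such a query, assuming the standard OVN control-socket path inside the ovn_nb_db container (the play's exact command may differ):

    import subprocess

    def ovn_nb_raft_role(container: str = "ovn_nb_db") -> str:
        """Return this member's Raft role (leader/follower/candidate) for OVN_Northbound."""
        out = subprocess.run(
            ["docker", "exec", container,
             "ovs-appctl", "-t", "/var/run/ovn/ovnnb_db.ctl",
             "cluster/status", "OVN_Northbound"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            if line.startswith("Role:"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    # Only the leader applies the connection settings, as in the play above.
    if ovn_nb_raft_role() == "leader":
        print("run the 'Configure OVN NB connection settings' step on this node")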
2025-06-11 14:51:28.012766 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:51:28.012774 | orchestrator | Wednesday 11 June 2025 14:51:25 +0000 (0:00:00.824) 0:02:22.493 ********
2025-06-11 14:51:28.012782 | orchestrator | ===============================================================================
2025-06-11 14:51:28.012798 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.69s
2025-06-11 14:51:28.012806 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.55s
2025-06-11 14:51:28.012814 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.01s
2025-06-11 14:51:28.012821 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.33s
2025-06-11 14:51:28.012829 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.06s
2025-06-11 14:51:28.012836 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.06s
2025-06-11 14:51:28.012844 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s
2025-06-11 14:51:28.012860 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.40s
2025-06-11 14:51:28.012868 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.50s
2025-06-11 14:51:28.012876 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.42s
2025-06-11 14:51:28.012884 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.21s
2025-06-11 14:51:28.012891 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.97s
2025-06-11 14:51:28.012899 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.93s
2025-06-11 14:51:28.012907 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.92s
2025-06-11 14:51:28.012914 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.84s
2025-06-11 14:51:28.012922 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.82s
2025-06-11 14:51:28.012934 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.70s
2025-06-11 14:51:28.012942 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s
2025-06-11 14:51:28.012950 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.33s
2025-06-11 14:51:28.012957 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.22s
2025-06-11 14:51:28.012965 | orchestrator | 2025-06-11 14:51:28 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:28.012974 | orchestrator | 2025-06-11 14:51:28 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:31.055041 | orchestrator | 2025-06-11 14:51:31 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:31.055366 | orchestrator | 2025-06-11 14:51:31 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:31.055530 | orchestrator | 2025-06-11 14:51:31 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:34.110647 | orchestrator | 2025-06-11 14:51:34 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:34.112602 | orchestrator | 2025-06-11 14:51:34 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:34.112680 | orchestrator | 2025-06-11 14:51:34 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:51:37.152477 | orchestrator | 2025-06-11 14:51:37 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:51:37.154239 | orchestrator | 2025-06-11 14:51:37 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:51:37.154279 | orchestrator | 2025-06-11 14:51:37 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:52:07.620408 | orchestrator | 2025-06-11 14:52:07 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:52:07.622242 | orchestrator | 2025-06-11 14:52:07 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:52:07.622285 | orchestrator | 2025-06-11 14:52:07 | INFO  | Task 8153fee6-f58a-422c-b5fd-8b0ac531b496 is in state STARTED
2025-06-11 14:52:07.623705 | orchestrator | 2025-06-11 14:52:07 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:52:25.873599 | orchestrator | 2025-06-11 14:52:25 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:52:25.874235 | orchestrator | 2025-06-11 14:52:25 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:52:25.874652 | orchestrator | 2025-06-11 14:52:25 | INFO  | Task 8153fee6-f58a-422c-b5fd-8b0ac531b496 is in state SUCCESS
2025-06-11 14:52:25.874676 | orchestrator | 2025-06-11 14:52:25 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:54:12.621096 | orchestrator | 2025-06-11 14:54:12 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:54:12.621295 | orchestrator | 2025-06-11 14:54:12 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:54:12.621970 | orchestrator | 2025-06-11 14:54:12 | INFO  | Wait 1 second(s) until the next check
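The repeating INFO lines come from the client polling its long-running deploy tasks until they leave the STARTED state; a third task (8153fee6) briefly joins the set and finishes first. A minimal reconstruction of such a loop, assuming a get_task_state(task_id) callable (the real client resolves Celery-style task states from its task backend):

    import time

    def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
        """Poll until no task is STARTED anymore, logging each check like the console above."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

The roughly three-second gap between checks, despite the one-second wait message, is presumably the extra time each round spends querying the task backend.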
2025-06-11 14:54:15.661096 | orchestrator | 2025-06-11 14:54:15 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state STARTED
2025-06-11 14:54:15.663600 | orchestrator | 2025-06-11 14:54:15 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:54:15.663877 | orchestrator | 2025-06-11 14:54:15 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:54:18.732490 | orchestrator | 2025-06-11 14:54:18 | INFO  | Task aa68503d-969f-4347-8f4b-e1d663cde8f7 is in state SUCCESS
2025-06-11 14:54:18.733443 | orchestrator |
2025-06-11 14:54:18.733476 | orchestrator | None
2025-06-11 14:54:18.733487 | orchestrator |
2025-06-11 14:54:18.733497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:54:18.733507 | orchestrator |
2025-06-11 14:54:18.733516 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:54:18.733526 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:00.602) 0:00:00.602 ********
2025-06-11 14:54:18.733535 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:54:18.733545 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:54:18.733554 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:54:18.733563 | orchestrator |
2025-06-11 14:54:18.733571 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:54:18.733580 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:00.491) 0:00:01.093 ********
2025-06-11 14:54:18.733608 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-11 14:54:18.733619 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-11 14:54:18.733628 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-11 14:54:18.733637 | orchestrator |
2025-06-11 14:54:18.733645 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-11 14:54:18.733654 | orchestrator |
2025-06-11 14:54:18.733663 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-11 14:54:18.733671 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:00.634) 0:00:01.728 ********
2025-06-11 14:54:18.733680 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.733714 | orchestrator |
2025-06-11 14:54:18.733724 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-11 14:54:18.733732 | orchestrator | Wednesday 11 June 2025 14:47:58 +0000 (0:00:01.031) 0:00:02.759 ********
2025-06-11 14:54:18.733741 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:54:18.733786 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:54:18.733796 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:54:18.733804 | orchestrator |
2025-06-11 14:54:18.733813 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-11 14:54:18.733849 | orchestrator | Wednesday 11 June 2025 14:47:59 +0000 (0:00:00.892) 0:00:03.652 ********
2025-06-11 14:54:18.733858 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.733866 | orchestrator |
2025-06-11 14:54:18.733875 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-11 14:54:18.733883 | orchestrator | Wednesday 11 June 2025 14:48:00 +0000 (0:00:01.448) 0:00:05.100 ********
2025-06-11 14:54:18.733942 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:54:18.733954 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:54:18.733964 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:54:18.733973 | orchestrator |
2025-06-11 14:54:18.733983 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-11 14:54:18.733992 | orchestrator | Wednesday 11 June 2025 14:48:01 +0000 (0:00:00.750) 0:00:05.850 ********
2025-06-11 14:54:18.734002 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734013 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734079 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-11 14:54:18.734122 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-11 14:54:18.734134 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-11 14:54:18.734144 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-11 14:54:18.734155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-11 14:54:18.734165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-11 14:54:18.734176 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-11 14:54:18.734187 | orchestrator |
2025-06-11 14:54:18.734197 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-11 14:54:18.734207 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:03.386) 0:00:09.236 ********
2025-06-11 14:54:18.734217 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-11 14:54:18.734228 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-11 14:54:18.734238 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-11 14:54:18.734249 | orchestrator |
2025-06-11 14:54:18.734259 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-11 14:54:18.734270 | orchestrator | Wednesday 11 June 2025 14:48:05 +0000 (0:00:00.673) 0:00:09.910 ********
2025-06-11 14:54:18.734280 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-11 14:54:18.734291 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-11 14:54:18.734301 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-11 14:54:18.734565 | orchestrator |
2025-06-11 14:54:18.734580 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-11 14:54:18.734602 | orchestrator | Wednesday 11 June 2025 14:48:07 +0000 (0:00:01.590) 0:00:11.500 ********
2025-06-11 14:54:18.734611 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-11 14:54:18.734620 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.734642 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-11 14:54:18.734652 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.734660 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-11 14:54:18.734669 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.734678 | orchestrator |
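The sysctl task applies one key at a time and leaves entries whose value is KOLLA_UNSET untouched, which is why net.ipv4.tcp_retries2 reports "ok" while the others report "changed"; the non-local bind knobs let keepalived and haproxy bind the virtual IP on nodes that do not currently hold it. Roughly the same thing by hand (a sketch only; the role also persists the values through the usual sysctl configuration files):

    import subprocess

    # Values applied by the "Setting sysctl values" task above.
    SYSCTLS = {
        "net.ipv6.ip_nonlocal_bind": "1",  # allow binding the VIP before it is assigned
        "net.ipv4.ip_nonlocal_bind": "1",
        "net.unix.max_dgram_qlen": "128",
    }
    # net.ipv4.tcp_retries2 is KOLLA_UNSET in this deployment and therefore left alone.

    for key, value in SYSCTLS.items():
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)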
2025-06-11 14:54:18.734687 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-11 14:54:18.734695 | orchestrator | Wednesday 11 June 2025 14:48:08 +0000 (0:00:00.893) 0:00:12.394 ********
2025-06-11 14:54:18.734716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.734730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.734740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.734748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.734758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.734779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.734790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.734804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.734814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.734823 | orchestrator |
2025-06-11 14:54:18.734832 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-11 14:54:18.734841 | orchestrator | Wednesday 11 June 2025 14:48:10 +0000 (0:00:02.598) 0:00:14.992 ********
2025-06-11 14:54:18.734849 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.734858 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.734867 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.734875 | orchestrator |
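Each container definition above carries a healthcheck dict (interval, retries, start_period, test, timeout) that maps directly onto Docker's native healthcheck options; healthcheck_curl and healthcheck_listen are helper scripts shipped inside the kolla images. A sketch of how the haproxy spec for testbed-node-0 would translate into a plain docker run (illustrative only; kolla-ansible drives this through the container engine API rather than the CLI):

    import subprocess

    healthcheck = {  # from the haproxy item for testbed-node-0 above
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
        "timeout": "30",
    }

    subprocess.run([
        "docker", "run", "-d", "--name", "haproxy", "--privileged",
        "--health-cmd", healthcheck["test"][1],
        "--health-interval", f"{healthcheck['interval']}s",
        "--health-retries", healthcheck["retries"],
        "--health-start-period", f"{healthcheck['start_period']}s",
        "--health-timeout", f"{healthcheck['timeout']}s",
        "registry.osism.tech/kolla/haproxy:2024.2",
    ], check=True)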
2025-06-11 14:54:18.734884 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-11 14:54:18.734893 | orchestrator | Wednesday 11 June 2025 14:48:12 +0000 (0:00:01.349) 0:00:16.342 ********
2025-06-11 14:54:18.734901 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-11 14:54:18.734910 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-11 14:54:18.734919 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-11 14:54:18.734928 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-11 14:54:18.734936 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-11 14:54:18.734945 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-11 14:54:18.734954 | orchestrator |
2025-06-11 14:54:18.734962 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-11 14:54:18.734971 | orchestrator | Wednesday 11 June 2025 14:48:14 +0000 (0:00:02.133) 0:00:18.475 ********
2025-06-11 14:54:18.734980 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.734988 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.734997 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.735005 | orchestrator |
2025-06-11 14:54:18.735014 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-11 14:54:18.735029 | orchestrator | Wednesday 11 June 2025 14:48:15 +0000 (0:00:01.493) 0:00:19.968 ********
2025-06-11 14:54:18.735038 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:54:18.735046 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:54:18.735055 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:54:18.735064 | orchestrator |
2025-06-11 14:54:18.735162 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-11 14:54:18.735171 | orchestrator | Wednesday 11 June 2025 14:48:17 +0000 (0:00:01.805) 0:00:21.774 ********
2025-06-11 14:54:18.735180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.735214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.735224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-11 14:54:18.735234 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.735243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.735294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.735305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-11 14:54:18.735314 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.735387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.735412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.735421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-11 14:54:18.735442 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.735451 | orchestrator |
2025-06-11 14:54:18.735460 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-06-11 14:54:18.735558 | orchestrator | Wednesday 11 June 2025 14:48:19 +0000 (0:00:01.938) 0:00:23.713 ********
2025-06-11 14:54:18.735567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.735652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.735661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-11 14:54:18.735671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-11 14:54:18.735686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.735700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.735710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e', '__omit_place_holder__d22cb888a3ffc116860a233c4a078e740cf7ae8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
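Each item in the task output above is a kolla-ansible service definition; its healthcheck dict becomes the container's Docker HEALTHCHECK, and healthcheck_curl / healthcheck_listen are helper scripts shipped inside the kolla images. A minimal sketch of what one such definition looks like as YAML, with literal values taken from the log; the surrounding loadbalancer_services layout and the {{ api_interface_address }} lookup are assumptions about the role defaults, not a copy of them:

    # Sketch only: assumed shape of a loadbalancer service definition.
    loadbalancer_services:
      haproxy:
        container_name: haproxy
        group: loadbalancer
        enabled: true
        image: registry.osism.tech/kolla/haproxy:2024.2
        privileged: true
        volumes:
          - "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro"
          - "haproxy_socket:/var/lib/kolla/haproxy/"
        healthcheck:
          interval: '30'      # seconds between probes
          retries: '3'        # consecutive failures before the container is marked unhealthy
          start_period: '5'   # grace period after container start
          test: ['CMD-SHELL', 'healthcheck_curl http://{{ api_interface_address }}:61313']  # 61313 is the haproxy monitor port seen in the log
          timeout: '30'

The per-node probe URLs in the log (192.168.16.10, .11, .12) are this one definition rendered against each host's API interface address.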
2025-06-11 14:54:18.735732 | orchestrator |
2025-06-11 14:54:18.735741 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-06-11 14:54:18.735749 | orchestrator | Wednesday 11 June 2025 14:48:23 +0000 (0:00:03.708) 0:00:27.421 ********
2025-06-11 14:54:18.735759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.735792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.735865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.735953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.735964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.735973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.735982 | orchestrator | 2025-06-11 14:54:18.735990 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-11 14:54:18.735999 | orchestrator | Wednesday 11 June 2025 14:48:27 +0000 (0:00:04.349) 0:00:31.771 ******** 2025-06-11 14:54:18.736008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-11 14:54:18.736084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-11 14:54:18.736094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-11 14:54:18.736103 | orchestrator | 2025-06-11 14:54:18.736111 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-11 14:54:18.736120 | orchestrator | Wednesday 11 June 2025 14:48:30 +0000 (0:00:02.798) 0:00:34.569 ******** 2025-06-11 14:54:18.736128 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-11 14:54:18.736137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-11 14:54:18.736155 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-11 14:54:18.736165 | orchestrator |
2025-06-11 14:54:18.736173 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-11 14:54:18.736182 | orchestrator | Wednesday 11 June 2025 14:48:33 +0000 (0:00:03.424) 0:00:37.993 ********
2025-06-11 14:54:18.736190 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.736199 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.736208 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.736216 | orchestrator |
2025-06-11 14:54:18.736224 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-11 14:54:18.736233 | orchestrator | Wednesday 11 June 2025 14:48:34 +0000 (0:00:00.773) 0:00:38.767 ********
2025-06-11 14:54:18.736247 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-11 14:54:18.736262 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-11 14:54:18.736271 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-11 14:54:18.736280 | orchestrator |
2025-06-11 14:54:18.736288 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-11 14:54:18.736297 | orchestrator | Wednesday 11 June 2025 14:48:37 +0000 (0:00:03.257) 0:00:42.024 ********
2025-06-11 14:54:18.736306 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-11 14:54:18.736314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-11 14:54:18.736323 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-11 14:54:18.736332 | orchestrator |
2025-06-11 14:54:18.736340 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-11 14:54:18.736349 | orchestrator | Wednesday 11 June 2025 14:48:39 +0000 (0:00:02.104) 0:00:44.129 ********
2025-06-11 14:54:18.736357 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-11 14:54:18.736382 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-11 14:54:18.736391 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-11 14:54:18.736434 | orchestrator |
2025-06-11 14:54:18.736443 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-11 14:54:18.736452 | orchestrator | Wednesday 11 June 2025 14:48:41 +0000 (0:00:01.593) 0:00:45.722 ********
2025-06-11 14:54:18.736460 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-11 14:54:18.736469 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-11 14:54:18.736477 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-11 14:54:18.736486 | orchestrator |
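The configuration tasks above all follow the same kolla-ansible pattern: render a Jinja2 source (a role template, or an overlay from /opt/configuration) to /etc/kolla/<service>/ on each node, where the container's config_files bind mount picks it up. A minimal sketch of that pattern under those assumptions; the source path is the one shown in the log, while the real task in the role also searches custom config paths and notifies a container restart handler:

    # Sketch only: the render-to-/etc/kolla pattern (assumed simplification).
    - name: Copying over keepalived.conf
      become: true
      ansible.builtin.template:
        src: /ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2  # path as seen in the log
        dest: /etc/kolla/keepalived/keepalived.conf
        mode: "0660"

haproxy.pem and haproxy-internal.pem are delivered the same way; by haproxy convention such a PEM bundle concatenates the server certificate, any intermediate chain, and the private key into a single file.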
2025-06-11 14:54:18.736510 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-11 14:54:18.736519 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:01.648) 0:00:47.370 ********
2025-06-11 14:54:18.736527 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.736536 | orchestrator |
2025-06-11 14:54:18.736544 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-11 14:54:18.736553 | orchestrator | Wednesday 11 June 2025 14:48:44 +0000 (0:00:00.943) 0:00:48.313 ********
2025-06-11 14:54:18.736562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.736572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.736653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.736704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.736714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.736723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-11 14:54:18.736732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.736741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.736750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-11 14:54:18.736766 | orchestrator | 2025-06-11 14:54:18.736775 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-11 14:54:18.736784 | orchestrator | Wednesday 11 June 2025 14:48:47 +0000 (0:00:03.487) 0:00:51.801 ******** 2025-06-11 14:54:18.736802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.736816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.736826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.736834 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.736844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.736853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.736862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.736876 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.736886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.736905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.736915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.736924 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.736933 | orchestrator | 2025-06-11 14:54:18.736941 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-11 14:54:18.736950 | orchestrator | Wednesday 11 June 2025 14:48:48 +0000 (0:00:00.587) 0:00:52.388 ******** 2025-06-11 14:54:18.736959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.736968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.736977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.736992 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.737001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737105 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.737114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737141 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.737236 | orchestrator | 2025-06-11 14:54:18.737247 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-11 14:54:18.737342 | orchestrator | Wednesday 11 June 2025 14:48:49 +0000 (0:00:01.291) 0:00:53.680 ******** 2025-06-11 14:54:18.737353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737498 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.737510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.737519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.737529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.737537 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.737546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.737566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.737582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.737592 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.737601 | orchestrator |
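All of the mariadb (and, further down, proxysql) backend-TLS tasks in this run report "skipping" because backend TLS is not enabled in the testbed; only the front-end haproxy.pem / haproxy-internal.pem bundles distributed earlier are in use. A sketch of the guard such a service-cert-copy task plausibly carries; the variable and filter names follow kolla-ansible conventions, but the exact expression and file layout here are assumptions:

    # Sketch only: assumed guard behind "Copying over backend internal TLS certificate".
    - name: "{{ project_name }} | Copying over backend internal TLS certificate"
      become: true
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"   # hypothetical source layout
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ project_name }}-cert.pem"
        mode: "0600"
      with_dict: "{{ project_services }}"
      when:
        - kolla_enable_tls_backend | bool   # disabled here, so every item logs "skipping"
        - item.value.enabled | bool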
2025-06-11 14:54:18.737609 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-11 14:54:18.737618 | orchestrator | Wednesday 11 June 2025 14:48:51 +0000 (0:00:01.984) 0:00:55.664 ********
2025-06-11 14:54:18.737632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.737641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.737650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.737659 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.737674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.737683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.737692 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737701 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.737717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737749 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.737757 | orchestrator | 2025-06-11 14:54:18.737766 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-11 14:54:18.737775 | orchestrator | Wednesday 11 June 2025 14:48:52 +0000 (0:00:01.002) 0:00:56.666 ******** 2025-06-11 14:54:18.737789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737816 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.737832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737865 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.737874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.737925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.737935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.737944 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.737984 | orchestrator | 2025-06-11 14:54:18.737994 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-11 14:54:18.738003 | orchestrator | Wednesday 11 June 2025 14:48:53 +0000 (0:00:01.172) 0:00:57.838 ******** 2025-06-11 14:54:18.738134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.738237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.738253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.738262 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.738271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.738288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 14:54:18.738297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-11 14:54:18.738306 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.738315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-11 14:54:18.738330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-11 
14:54:18.738339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738352 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.738378 | orchestrator |
2025-06-11 14:54:18.738388 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-06-11 14:54:18.738397 | orchestrator | Wednesday 11 June 2025 14:48:54 +0000 (0:00:00.718) 0:00:58.557 ********
2025-06-11 14:54:18.738406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738439 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.738447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738483 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.738496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738596 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.738605 | orchestrator |
2025-06-11 14:54:18.738614 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-06-11 14:54:18.738623 | orchestrator | Wednesday 11 June 2025 14:48:54 +0000 (0:00:00.590) 0:00:59.148 ********
2025-06-11 14:54:18.738632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738659 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.738691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738726 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.738762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.738771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.738780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.738789 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.738797 | orchestrator |
2025-06-11 14:54:18.738806 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-06-11 14:54:18.738815 | orchestrator | Wednesday 11 June 2025 14:48:56 +0000 (0:00:01.202) 0:01:00.351 ********
2025-06-11 14:54:18.738823 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-11 14:54:18.738833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-11 14:54:18.738847 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-11 14:54:18.738866 | orchestrator |
2025-06-11 14:54:18.738875 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-06-11 14:54:18.738920 | orchestrator | Wednesday 11 June 2025 14:48:57 +0000 (0:00:01.585) 0:01:01.936 ********
2025-06-11 14:54:18.738929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-11 14:54:18.738938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-11 14:54:18.738946 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-11 14:54:18.738955 | orchestrator |
2025-06-11 14:54:18.738986 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-06-11 14:54:18.738995 | orchestrator | Wednesday 11 June 2025 14:48:59 +0000 (0:00:01.438) 0:01:03.375 ********
2025-06-11 14:54:18.739003 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 14:54:18.739012 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 14:54:18.739021 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.739051 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 14:54:18.739060 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 14:54:18.739069 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.739077 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 14:54:18.739086 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 14:54:18.739094 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.739103 | orchestrator |
2025-06-11 14:54:18.739111 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-06-11 14:54:18.739120 | orchestrator | Wednesday 11 June 2025 14:49:01 +0000 (0:00:02.340) 0:01:05.715 ********
2025-06-11 14:54:18.739129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.739138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.739147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-11 14:54:18.739168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.739181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.739191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-11 14:54:18.739199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.739209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.739218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-11 14:54:18.739226 | orchestrator |
2025-06-11 14:54:18.739235 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-06-11 14:54:18.739244 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:02.914) 0:01:08.630 ********
2025-06-11 14:54:18.739253 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.739266 | orchestrator |
2025-06-11 14:54:18.739275 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-06-11 14:54:18.739284 | orchestrator | Wednesday 11 June 2025 14:49:05 +0000 (0:00:00.791) 0:01:09.422 ********
2025-06-11 14:54:18.739300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739486 | orchestrator |
2025-06-11 14:54:18.739495 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-06-11 14:54:18.739503 | orchestrator | Wednesday 11 June 2025 14:49:09 +0000 (0:00:04.304) 0:01:13.726 ********
2025-06-11 14:54:18.739513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739555 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.739564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739628 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.739648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-11 14:54:18.739657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-11 14:54:18.739666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.739690 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.739698 | orchestrator |
2025-06-11 14:54:18.739707 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-06-11 14:54:18.739716 | orchestrator | Wednesday 11 June 2025 14:49:10 +0000 (0:00:01.043) 0:01:14.770 ********
2025-06-11 14:54:18.739725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739743 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.739752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739770 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.739778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-11 14:54:18.739795 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.739804 | orchestrator |
2025-06-11 14:54:18.739818 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-06-11 14:54:18.739827 | orchestrator | Wednesday 11 June 2025 14:49:12 +0000 (0:00:02.134) 0:01:16.908 ********
2025-06-11 14:54:18.739836 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.739844 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.739853 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.739861 | orchestrator |
2025-06-11 14:54:18.739870 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-06-11 14:54:18.739878 | orchestrator | Wednesday 11 June 2025 14:49:14 +0000 (0:00:01.698) 0:01:18.606 ********
2025-06-11 14:54:18.739887 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.739895 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.739904 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.739912 | orchestrator |
2025-06-11 14:54:18.739925 | orchestrator | TASK [include_role : barbican] *************************************************
2025-06-11 14:54:18.739933 | orchestrator | Wednesday 11 June 2025 14:49:16 +0000 (0:00:02.462) 0:01:21.069 ********
2025-06-11 14:54:18.739941 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.739949 | orchestrator |
2025-06-11 14:54:18.739957 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-06-11 14:54:18.739965 | orchestrator | Wednesday 11 June 2025 14:49:17 +0000 (0:00:00.466) 0:01:21.535 ********
2025-06-11 14:54:18.739974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740209 | orchestrator |
2025-06-11 14:54:18.740217 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-06-11 14:54:18.740225 | orchestrator | Wednesday 11 June 2025 14:49:23 +0000 (0:00:06.071) 0:01:27.607 ********
2025-06-11 14:54:18.740240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740276 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.740284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.740329 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.740342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.740359 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.740408 | orchestrator |
2025-06-11 14:54:18.740417 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-06-11 14:54:18.740425 | orchestrator | Wednesday 11 June 2025 14:49:24 +0000 (0:00:00.821) 0:01:28.428 ********
2025-06-11 14:54:18.740433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740451 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.740567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740586 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.740594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-11 14:54:18.740609 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.740617 | orchestrator |
2025-06-11 14:54:18.740625 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-11 14:54:18.740633 | orchestrator | Wednesday 11 June 2025 14:49:25 +0000 (0:00:01.209) 0:01:29.637 ********
2025-06-11 14:54:18.740641 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.740648 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.740656 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.740664 | orchestrator |
2025-06-11 14:54:18.740671 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-11 14:54:18.740678 | orchestrator | Wednesday 11 June 2025 14:49:26 +0000 (0:00:01.432) 0:01:31.070 ********
2025-06-11 14:54:18.740685 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.740697 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.740703 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.740710 | orchestrator |
2025-06-11 14:54:18.740722 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-11 14:54:18.740728 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:01.940) 0:01:33.010 ********
2025-06-11 14:54:18.740735 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.740742 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.740748 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.740755 | orchestrator |
2025-06-11 14:54:18.740761 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-11 14:54:18.740768 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:00.296) 0:01:33.307 ******** 2025-06-11 14:54:18.740774 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.740781 | orchestrator | 2025-06-11 14:54:18.740792 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-11 14:54:18.740798 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:00.648) 0:01:33.956 ******** 2025-06-11 14:54:18.740806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-11 14:54:18.740814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-11 14:54:18.740821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-11 14:54:18.740828 | orchestrator | 2025-06-11 14:54:18.740835 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-11 14:54:18.740841 | orchestrator | Wednesday 11 June 2025 14:49:32 +0000 (0:00:02.961) 0:01:36.917 ******** 2025-06-11 
14:54:18.740859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-11 14:54:18.740870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-11 14:54:18.740877 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.740884 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.740891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-11 14:54:18.740898 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.740904 | orchestrator | 2025-06-11 14:54:18.740911 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-11 14:54:18.740917 | orchestrator | Wednesday 11 June 2025 14:49:33 +0000 (0:00:01.207) 0:01:38.125 ******** 2025-06-11 14:54:18.740926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-11 14:54:18.740935 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-11 14:54:18.740943 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.740950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-11 14:54:18.740962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-11 14:54:18.740969 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.740981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-11 14:54:18.740988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-11 14:54:18.740999 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741006 | orchestrator |
2025-06-11 14:54:18.741012 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-06-11 14:54:18.741019 | orchestrator | Wednesday 11 June 2025 14:49:35 +0000 (0:00:01.667) 0:01:39.792 ********
2025-06-11 14:54:18.741026 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.741032 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.741039 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741045 | orchestrator |
2025-06-11 14:54:18.741052 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-06-11 14:54:18.741058 | orchestrator | Wednesday 11 June 2025 14:49:36 +0000 (0:00:00.922) 0:01:40.715 ********
2025-06-11 14:54:18.741065 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.741071 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.741078 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741084 | orchestrator |
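The custom_member_list entries printed above are already literal HAProxy server lines: the Rados Gateway backends live on testbed-node-3 through testbed-node-5 (port 8081), while the frontend itself is published on port 6780. As a rough sketch of how such a service entry maps onto an HAProxy listen section (the bind address and the rendering function are illustrative assumptions, not the actual kolla-ansible template):

    # Sketch only: turn a service entry as printed in this log into a minimal
    # HAProxy "listen" block. The VIP 192.168.16.9 is an assumed bind address.
    radosgw = {
        'enabled': True,
        'mode': 'http',
        'port': '6780',
        'custom_member_list': [
            'server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5',
            'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5',
            'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5',
        ],
    }

    def render_listen(name, svc, vip):
        # Members come verbatim from custom_member_list, so nothing here is
        # derived from the Ansible inventory.
        lines = [f'listen {name}', f'    mode {svc["mode"]}', f'    bind {vip}:{svc["port"]}']
        lines += [f'    {member}' for member in svc['custom_member_list']]
        return '\n'.join(lines)

    print(render_listen('radosgw', radosgw, '192.168.16.9'))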
2025-06-11 14:54:18.741091 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-06-11 14:54:18.741097 | orchestrator | Wednesday 11 June 2025 14:49:37 +0000 (0:00:01.225) 0:01:41.941 ********
2025-06-11 14:54:18.741104 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.741111 | orchestrator |
2025-06-11 14:54:18.741117 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-06-11 14:54:18.741124 | orchestrator | Wednesday 11 June 2025 14:49:38 +0000 (0:00:00.722) 0:01:42.663 ********
2025-06-11 14:54:18.741131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.741143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder',
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.741181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.741200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741254 | orchestrator | 2025-06-11 14:54:18.741261 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-11 14:54:18.741268 | orchestrator | Wednesday 11 June 2025 14:49:42 +0000 (0:00:03.600) 0:01:46.264 ******** 2025-06-11 14:54:18.741274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.741282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741312 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.741319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.741330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741356 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.741380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.741387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741415 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.741422 | orchestrator | 2025-06-11 14:54:18.741428 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-11 14:54:18.741435 | orchestrator | Wednesday 11 June 2025 14:49:43 +0000 (0:00:01.241) 0:01:47.506 ******** 2025-06-11 14:54:18.741442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-11 14:54:18.741453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-11 14:54:18.741460 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.741467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-11 14:54:18.741477 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-11 14:54:18.741484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-11 14:54:18.741491 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-11 14:54:18.741504 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.741517 | orchestrator |
2025-06-11 14:54:18.741523 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-06-11 14:54:18.741530 | orchestrator | Wednesday 11 June 2025 14:49:44 +0000 (0:00:00.982) 0:01:48.489 ********
2025-06-11 14:54:18.741536 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.741543 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.741549 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.741556 | orchestrator |
2025-06-11 14:54:18.741562 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-06-11 14:54:18.741569 | orchestrator | Wednesday 11 June 2025 14:49:45 +0000 (0:00:01.232) 0:01:49.721 ********
2025-06-11 14:54:18.741575 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.741582 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.741588 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.741595 | orchestrator |
2025-06-11 14:54:18.741601 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-06-11 14:54:18.741608 | orchestrator | Wednesday 11 June 2025 14:49:47 +0000 (0:00:01.893) 0:01:51.614 ********
2025-06-11 14:54:18.741615 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.741621 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.741628 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741652 | orchestrator |
2025-06-11 14:54:18.741659 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-06-11 14:54:18.741666 | orchestrator | Wednesday 11 June 2025 14:49:47 +0000 (0:00:00.507) 0:01:52.122 ********
2025-06-11 14:54:18.741672 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.741679 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.741686 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.741692 | orchestrator |
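The two cinder ProxySQL tasks above only copy user and query-rule snippets onto the controllers; the log does not show their contents. For orientation, ProxySQL routes MySQL sessions through its mysql_users and mysql_query_rules tables, so such snippets boil down to entries of roughly this shape (every concrete value below is a hypothetical placeholder, not data from this deployment):

    # Hypothetical sketch of ProxySQL-style entries for a service DB account.
    # Column names match ProxySQL's mysql_users / mysql_query_rules tables;
    # the values (account name, hostgroup numbers) are invented placeholders.
    cinder_user = {
        'username': 'cinder',        # assumed service account name
        'password': 'REDACTED',
        'default_hostgroup': 0,      # assumed writer hostgroup
    }
    cinder_rule = {
        'rule_id': 1,
        'active': 1,
        'username': 'cinder',
        'destination_hostgroup': 0,  # route matched sessions to the writer
        'apply': 1,
    }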
2025-06-11 14:54:18.741699 | orchestrator | TASK [include_role : designate] ************************************************
2025-06-11 14:54:18.741705 | orchestrator | Wednesday 11 June 2025 14:49:48 +0000 (0:00:00.337) 0:01:52.460 ********
2025-06-11 14:54:18.741712 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.741719 | orchestrator |
2025-06-11 14:54:18.741725 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-06-11 14:54:18.741732 | orchestrator | Wednesday 11 June 2025 14:49:49 +0000 (0:00:01.058) 0:01:53.518 ********
2025-06-11 14:54:18.741739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-11 14:54:18.741751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-11 14:54:18.741763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port':
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 14:54:18.741807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 14:54:18.741821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.741889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 14:54:18.741904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-11 14:54:18.741911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.741951 | orchestrator |
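Every container definition in the designate items above carries the same five-field healthcheck mapping: interval, retries, start_period and timeout as second counts in strings, plus a CMD-SHELL test built on the kolla helper scripts (healthcheck_curl, healthcheck_port, healthcheck_listen). A small sketch of normalising such a mapping into the shape Docker's HealthConfig expects, where durations are nanoseconds; this conversion is illustrative, not the code kolla-ansible actually runs:

    # Sketch: convert a healthcheck mapping as printed in this log (seconds,
    # as strings) into Docker Engine API HealthConfig fields (nanoseconds).
    NS_PER_S = 1_000_000_000

    def to_docker_healthcheck(hc):
        return {
            'Test': hc['test'],
            'Interval': int(hc['interval']) * NS_PER_S,
            'Timeout': int(hc['timeout']) * NS_PER_S,
            'Retries': int(hc['retries']),
            'StartPeriod': int(hc['start_period']) * NS_PER_S,
        }

    example = {'interval': '30', 'retries': '3', 'start_period': '5',
               'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'],
               'timeout': '30'}
    print(to_docker_healthcheck(example))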
2025-06-11 14:54:18.741957 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-06-11 14:54:18.741964 | orchestrator | Wednesday 11 June 2025 14:49:53 +0000 (0:00:03.899) 0:01:57.417 ********
2025-06-11 14:54:18.741984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-11 14:54:18.741992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-11 14:54:18.741999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.742006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.742013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 14:54:18.742080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 14:54:18.742094 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742149 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}})  2025-06-11 14:54:18.742164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 14:54:18.742171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.742218 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742225 | orchestrator | 2025-06-11 14:54:18.742232 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-11 14:54:18.742239 | orchestrator | Wednesday 11 June 2025 14:49:54 +0000 (0:00:00.818) 0:01:58.236 ******** 2025-06-11 14:54:18.742246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742261 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742281 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-11 14:54:18.742301 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742312 | orchestrator | 2025-06-11 14:54:18.742319 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-11 14:54:18.742326 | orchestrator | Wednesday 11 June 2025 14:49:55 +0000 (0:00:01.066) 0:01:59.302 ******** 2025-06-11 14:54:18.742332 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.742339 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.742345 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.742352 | orchestrator | 2025-06-11 14:54:18.742358 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-11 14:54:18.742376 | orchestrator | Wednesday 11 June 2025 14:49:56 +0000 (0:00:01.600) 0:02:00.903 ******** 2025-06-11 14:54:18.742383 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.742390 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.742396 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.742403 | orchestrator | 2025-06-11 14:54:18.742409 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-11 14:54:18.742416 | orchestrator | Wednesday 11 June 2025 
14:49:58 +0000 (0:00:01.966) 0:02:02.869 ******** 2025-06-11 14:54:18.742423 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742429 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742436 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742442 | orchestrator | 2025-06-11 14:54:18.742449 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-11 14:54:18.742456 | orchestrator | Wednesday 11 June 2025 14:49:58 +0000 (0:00:00.302) 0:02:03.172 ******** 2025-06-11 14:54:18.742462 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.742469 | orchestrator | 2025-06-11 14:54:18.742475 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-11 14:54:18.742482 | orchestrator | Wednesday 11 June 2025 14:49:59 +0000 (0:00:00.757) 0:02:03.929 ******** 2025-06-11 14:54:18.742499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 14:54:18.742508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 14:54:18.742553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 14:54:18.742597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742609 | orchestrator | 2025-06-11 14:54:18.742616 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-11 14:54:18.742623 | orchestrator | Wednesday 11 June 2025 14:50:03 +0000 (0:00:04.102) 0:02:08.032 ******** 2025-06-11 14:54:18.742648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 14:54:18.742661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742676 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 
14:54:18.742711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742723 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 14:54:18.742749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.742757 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742763 | orchestrator | 2025-06-11 14:54:18.742770 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-11 14:54:18.742777 | orchestrator | Wednesday 11 June 2025 14:50:06 +0000 (0:00:02.866) 0:02:10.899 ******** 2025-06-11 14:54:18.742790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-11 14:54:18.742797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  
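
The `haproxy` sub-dicts logged in the glance items above (and in the designate items before them) are what the haproxy-config role turns into HAProxy frontend/backend stanzas: `custom_member_list` supplies pre-rendered `server` lines, while `frontend_http_extra`/`backend_http_extra` inject extra options such as the six-hour timeouts used for long image uploads. The following is a minimal Python sketch of that mapping, illustrative only and not kolla-ansible's actual Jinja2 template; the `render_backend` helper and the `_back` backend naming are assumptions.

def render_backend(name, cfg):
    """Render one HAProxy backend stanza from a kolla-style 'haproxy'
    sub-dict as seen in the loop items above. Illustrative only."""
    if not cfg.get("enabled"):
        return ""  # disabled entries (e.g. glance_tls_proxy) render nothing
    lines = [f"backend {name}_back", f"    mode {cfg.get('mode', 'http')}"]
    # Extra backend options, e.g. 'timeout server 6h' for image uploads.
    lines += [f"    {opt}" for opt in cfg.get("backend_http_extra", [])]
    # Pre-rendered member lines; the logged lists end with an empty
    # string, which is skipped here.
    lines += [f"    {m}" for m in cfg.get("custom_member_list", []) if m]
    return "\n".join(lines)

glance_api = {
    "enabled": True,
    "mode": "http",
    "port": "9292",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",
    ],
}

print(render_backend("glance_api", glance_api))

In the member lines, `check inter 2000 rise 2 fall 5` is standard HAProxy syntax: probe every 2000 ms, mark the server up after 2 consecutive successes and down after 5 consecutive failures.
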
2025-06-11 14:54:18.742804 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-11 14:54:18.742819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-11 14:54:18.742826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-11 14:54:18.742833 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-11 14:54:18.742852 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742859 | orchestrator | 2025-06-11 14:54:18.742865 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-11 14:54:18.742872 | orchestrator | Wednesday 11 June 2025 14:50:09 +0000 (0:00:03.162) 0:02:14.061 ******** 2025-06-11 14:54:18.742882 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.742889 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.742895 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.742902 | orchestrator | 2025-06-11 14:54:18.742909 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-11 14:54:18.742921 | orchestrator | Wednesday 11 June 2025 14:50:11 +0000 (0:00:01.542) 0:02:15.603 ******** 2025-06-11 14:54:18.742928 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.742935 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.742941 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.742948 | orchestrator | 2025-06-11 14:54:18.742955 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2025-06-11 14:54:18.742961 | orchestrator | Wednesday 11 June 2025 14:50:13 +0000 (0:00:01.983) 0:02:17.587 ******** 2025-06-11 14:54:18.742968 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.742974 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.742981 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.742987 | orchestrator | 2025-06-11 14:54:18.742994 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-11 14:54:18.743000 | orchestrator | Wednesday 11 June 2025 14:50:13 +0000 (0:00:00.332) 0:02:17.919 ******** 2025-06-11 14:54:18.743007 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.743013 | orchestrator | 2025-06-11 14:54:18.743020 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-11 14:54:18.743026 | orchestrator | Wednesday 11 June 2025 14:50:14 +0000 (0:00:00.827) 0:02:18.747 ******** 2025-06-11 14:54:18.743048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 14:54:18.743056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 14:54:18.743063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 14:54:18.743070 | orchestrator | 2025-06-11 14:54:18.743077 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-11 14:54:18.743083 | orchestrator | Wednesday 11 June 2025 14:50:17 +0000 (0:00:03.388) 0:02:22.136 ******** 2025-06-11 14:54:18.743096 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-11 14:54:18.743112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-11 14:54:18.743119 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.743126 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.743133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-11 14:54:18.743140 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.743146 | orchestrator | 2025-06-11 14:54:18.743153 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-11 14:54:18.743160 | orchestrator | Wednesday 11 June 2025 14:50:18 +0000 (0:00:00.404) 0:02:22.541 ******** 2025-06-11 14:54:18.743166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-11 14:54:18.743173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-11 14:54:18.743180 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.743187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-11 14:54:18.743194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  
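
Most service definitions in this play (the designate and glance items above, horizon and keystone below) also carry a `healthcheck` dict: `test` is the command the container engine runs inside the container, and `interval`, `timeout` and `start_period` appear to be seconds that the deploy tooling converts into the integer-nanosecond durations the Docker Engine API expects. The converter below is a hedged sketch of that translation, not OSISM or kolla code; the `to_engine_healthcheck` name and the seconds assumption are illustrative.

NS_PER_SECOND = 1_000_000_000

def to_engine_healthcheck(hc):
    """Translate a kolla-style healthcheck dict (string seconds, as
    logged above) into integer-nanosecond Docker Engine fields.
    Hypothetical helper, for illustration only."""
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port designate-central 5672']
        "interval": int(hc["interval"]) * NS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "start_period": int(hc["start_period"]) * NS_PER_SECOND,
        "retries": int(hc["retries"]),
    }

print(to_engine_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
    "timeout": "30",
}))

The `healthcheck_port`, `healthcheck_curl` and `healthcheck_listen` commands seen in the `test` fields are helper scripts shipped in the kolla images: roughly, they verify that the named process holds a connection to the given port (5672 is the RabbitMQ port), that an HTTP endpoint responds, and that a process is listening on a port, respectively.
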
2025-06-11 14:54:18.743200 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.743207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-11 14:54:18.743214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-11 14:54:18.743220 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.743227 | orchestrator | 2025-06-11 14:54:18.743233 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-11 14:54:18.743240 | orchestrator | Wednesday 11 June 2025 14:50:18 +0000 (0:00:00.629) 0:02:23.170 ******** 2025-06-11 14:54:18.743247 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.743258 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.743265 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.743271 | orchestrator | 2025-06-11 14:54:18.743278 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-11 14:54:18.743284 | orchestrator | Wednesday 11 June 2025 14:50:20 +0000 (0:00:01.537) 0:02:24.708 ******** 2025-06-11 14:54:18.743291 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.743298 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.743304 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.743311 | orchestrator | 2025-06-11 14:54:18.743317 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-11 14:54:18.743324 | orchestrator | Wednesday 11 June 2025 14:50:22 +0000 (0:00:01.956) 0:02:26.665 ******** 2025-06-11 14:54:18.743331 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.743337 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.743348 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.743355 | orchestrator | 2025-06-11 14:54:18.743400 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-11 14:54:18.743408 | orchestrator | Wednesday 11 June 2025 14:50:22 +0000 (0:00:00.304) 0:02:26.969 ******** 2025-06-11 14:54:18.743414 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.743421 | orchestrator | 2025-06-11 14:54:18.743428 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-11 14:54:18.743434 | orchestrator | Wednesday 11 June 2025 14:50:23 +0000 (0:00:00.894) 0:02:27.864 ******** 2025-06-11 14:54:18.743446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:54:18.743465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:54:18.743479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:54:18.743491 | orchestrator | 2025-06-11 14:54:18.743498 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-11 14:54:18.743505 | orchestrator | Wednesday 11 June 2025 14:50:28 +0000 (0:00:05.075) 0:02:32.939 ******** 2025-06-11 14:54:18.743522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': 
'30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:54:18.743531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:54:18.743546 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.743553 | orchestrator | skipping: [testbed-node-0] 2025-06-11 
14:54:18.743570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:54:18.743579 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.743585 | orchestrator | 2025-06-11 14:54:18.743592 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-11 14:54:18.743599 | orchestrator | Wednesday 11 June 2025 14:50:29 +0000 (0:00:00.497) 0:02:33.437 ******** 2025-06-11 14:54:18.743606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-11 14:54:18.743614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-11 14:54:18.743621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-11 14:54:18.743633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-11 14:54:18.743640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-11 14:54:18.743647 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.743654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-11 14:54:18.743661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-11 14:54:18.743672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-11 14:54:18.743679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-11 14:54:18.743686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-11 14:54:18.743693 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.743704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-11 14:54:18.743711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-11 14:54:18.743717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})
2025-06-11 14:54:18.743724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-11 14:54:18.743731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-11 14:54:18.743743 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.743750 | orchestrator |
2025-06-11 14:54:18.743756 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-11 14:54:18.743763 | orchestrator | Wednesday 11 June 2025 14:50:30 +0000 (0:00:00.961) 0:02:34.399 ********
2025-06-11 14:54:18.743770 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.743776 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.743783 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.743789 | orchestrator |
2025-06-11 14:54:18.743796 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-11 14:54:18.743803 | orchestrator | Wednesday 11 June 2025 14:50:31 +0000 (0:00:01.429) 0:02:35.829 ********
2025-06-11 14:54:18.743809 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.743816 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.743823 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.743829 | orchestrator |
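
The horizon items above show the general shape of kolla-ansible's haproxy-config input: per-service dicts whose 'haproxy' keys become frontends and backends. For horizon, the port-80 listeners are pure redirects to 443, every frontend first diverts Let's Encrypt HTTP-01 challenges to the acme_client backend via the logged path_reg rule, and tls_backend 'no' means TLS terminates at HAProxy while the nodes are reached over plain HTTP on listen_port 80. A minimal sketch of roughly what this renders to, assuming 192.168.16.9 is the internal VIP (it appears alongside the node addresses in the no_proxy values further down) and leaving external specifics as placeholders; the real output comes from kolla-ansible's haproxy templates, not this sketch:

    # Illustrative only; names simplified from the kolla-ansible templates.
    frontend horizon_external_redirect_front
        mode http
        bind <external-vip>:80
        # ACME HTTP-01 challenges bypass the HTTPS redirect (frontend_redirect_extra)
        use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
        redirect scheme https code 301

    frontend horizon_external_front
        mode http
        bind <external-vip>:443 ssl crt ...   # TLS ends here; tls_backend is 'no'
        use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
        default_backend horizon_back

    backend horizon_back
        mode http
        balance roundrobin                    # from backend_http_extra
        server testbed-node-0 192.168.16.10:80 check
        server testbed-node-1 192.168.16.11:80 check
        server testbed-node-2 192.168.16.12:80 check

The internal horizon/horizon_redirect pair has the same shape on the internal VIP, and the acme_client entry itself carries with_frontend False: it only contributes the backend that the path_reg rules point at.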
2025-06-11 14:54:18.743836 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-11 14:54:18.743842 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:01.809) 0:02:37.639 ********
2025-06-11 14:54:18.743849 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.743856 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.743862 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.743869 | orchestrator |
2025-06-11 14:54:18.743875 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-11 14:54:18.743882 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.325) 0:02:37.993 ********
2025-06-11 14:54:18.743889 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.743895 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.743902 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.743908 | orchestrator |
2025-06-11 14:54:18.743915 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-11 14:54:18.743922 | orchestrator | Wednesday 11 June 2025 14:50:34 +0000 (0:00:00.325) 0:02:38.318 ********
2025-06-11 14:54:18.743928 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.743935 | orchestrator |
2025-06-11 14:54:18.743941 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-11 14:54:18.743947 | orchestrator | Wednesday 11 June 2025 14:50:35 +0000 (0:00:01.150) 0:02:39.469 ********
2025-06-11 14:54:18.743958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 14:54:18.743969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 14:54:18.743981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 14:54:18.743989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 14:54:18.743996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-11 14:54:18.744003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-11 14:54:18.744018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-11 14:54:18.744029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-11 14:54:18.744036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-11 14:54:18.744043 | orchestrator | 2025-06-11 14:54:18.744049 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-11 14:54:18.744055 | orchestrator | Wednesday 11 June 2025 
14:50:39 +0000 (0:00:04.163) 0:02:43.632 ******** 2025-06-11 14:54:18.744062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-11 14:54:18.744069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-11 14:54:18.744083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-11 14:54:18.744095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-11 14:54:18.744102 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.744108 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-11 14:54:18.744115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-11 14:54:18.744121 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.744128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-11 14:54:18.744138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-11 14:54:18.744149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 14:54:18.744159 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.744166 | orchestrator |
2025-06-11 14:54:18.744172 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-11 14:54:18.744178 | orchestrator | Wednesday 11 June 2025 14:50:40 +0000 (0:00:00.827) 0:02:44.460 ********
2025-06-11 14:54:18.744185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744204 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.744211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744217 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.744223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-11 14:54:18.744236 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.744242 | orchestrator |
2025-06-11 14:54:18.744249 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-11 14:54:18.744255 | orchestrator | Wednesday 11 June 2025 14:50:41 +0000 (0:00:01.344) 0:02:45.804 ********
2025-06-11 14:54:18.744261 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.744267 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.744273 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.744279 | orchestrator |
2025-06-11 14:54:18.744285 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-11 14:54:18.744291 | orchestrator | Wednesday 11 June 2025 14:50:42 +0000 (0:00:01.345) 0:02:47.149 ********
2025-06-11 14:54:18.744297 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.744303 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.744309 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.744315 | orchestrator |
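
Keystone's haproxy entries collapse to the same internal/external pattern, just without the ACME redirect machinery: two plain-HTTP listeners on port 5000, balance roundrobin from backend_http_extra, and tls_backend 'no' so HAProxy speaks unencrypted HTTP to the nodes; the per-node endpoints are exactly the ones the healthcheck entries probe. A hedged sketch, again assuming 192.168.16.9 as the internal VIP and leaving the external address as a placeholder:

    # Sketch only; the haproxy-config role renders the real configuration.
    listen keystone_internal
        mode http
        bind 192.168.16.9:5000
        balance roundrobin                 # backend_http_extra
        server testbed-node-0 192.168.16.10:5000 check
        server testbed-node-1 192.168.16.11:5000 check
        server testbed-node-2 192.168.16.12:5000 check

    listen keystone_external
        mode http
        bind <external-vip>:5000 ssl crt ...   # external_fqdn api.testbed.osism.xyz
        balance roundrobin
        server testbed-node-0 192.168.16.10:5000 check
        server testbed-node-1 192.168.16.11:5000 check
        server testbed-node-2 192.168.16.12:5000 check

Note that both the "Add configuration ... when using single external frontend" and the "Configuring firewall" tasks are skipped on all three nodes throughout this run, so each service keeps its own dedicated external listener and no firewall rules are managed here.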
2025-06-11 14:54:18.744321 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-11 14:54:18.744328 | orchestrator | Wednesday 11 June 2025 14:50:45 +0000 (0:00:02.331) 0:02:49.480 ********
2025-06-11 14:54:18.744334 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.744340 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.744354 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.744360 | orchestrator |
2025-06-11 14:54:18.744376 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-11 14:54:18.744382 | orchestrator | Wednesday 11 June 2025 14:50:45 +0000 (0:00:00.326) 0:02:49.807 ********
2025-06-11 14:54:18.744388 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.744394 | orchestrator |
2025-06-11 14:54:18.744400 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-11 14:54:18.744407 | orchestrator | Wednesday 11 June 2025 14:50:46 +0000 (0:00:01.286) 0:02:51.093 ********
2025-06-11 14:54:18.744422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-11 14:54:18.744429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.744436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-11 14:54:18.744443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.744455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-11 14:54:18 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED
2025-06-11 14:54:18.744468 | orchestrator | 2025-06-11 14:54:18 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED
2025-06-11 14:54:18.744474 | orchestrator | 2025-06-11 14:54:18 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED
2025-06-11 14:54:18.744481 | orchestrator | 2025-06-11 14:54:18 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:54:18.744487 | orchestrator |
2025-06-11 14:54:18.744497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.744504 | orchestrator |
2025-06-11 14:54:18.744510 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-11 14:54:18.744517 | orchestrator | Wednesday 11 June 2025 14:50:50 +0000 (0:00:03.742) 0:02:54.836 ********
2025-06-11 14:54:18.744523 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 14:54:18.744530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.744541 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.744547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 14:54:18.744561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.744568 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.744574 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 14:54:18.744581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.744587 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.744593 | orchestrator | 2025-06-11 14:54:18.744600 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-11 14:54:18.744606 | orchestrator | Wednesday 11 June 2025 14:50:51 +0000 (0:00:00.943) 0:02:55.779 ******** 2025-06-11 14:54:18.744797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744822 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.744828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744840 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.744846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-11 14:54:18.744858 | 
orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.744864 | orchestrator | 2025-06-11 14:54:18.744871 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-11 14:54:18.744877 | orchestrator | Wednesday 11 June 2025 14:50:53 +0000 (0:00:01.733) 0:02:57.513 ******** 2025-06-11 14:54:18.744883 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.744889 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.744895 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.744901 | orchestrator | 2025-06-11 14:54:18.744907 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-11 14:54:18.744913 | orchestrator | Wednesday 11 June 2025 14:50:54 +0000 (0:00:01.425) 0:02:58.938 ******** 2025-06-11 14:54:18.744919 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.744925 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.744931 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.744937 | orchestrator | 2025-06-11 14:54:18.744943 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-11 14:54:18.744949 | orchestrator | Wednesday 11 June 2025 14:50:57 +0000 (0:00:02.430) 0:03:01.368 ******** 2025-06-11 14:54:18.744955 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.744961 | orchestrator | 2025-06-11 14:54:18.744967 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-11 14:54:18.744978 | orchestrator | Wednesday 11 June 2025 14:50:58 +0000 (0:00:01.304) 0:03:02.673 ******** 2025-06-11 14:54:18.744984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-11 14:54:18.744991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-11 14:54:18.745031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-11 14:54:18.745044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745085 | orchestrator | 2025-06-11 14:54:18.745094 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-11 14:54:18.745101 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:03.623) 0:03:06.296 ******** 2025-06-11 14:54:18.745107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-11 14:54:18.745118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745140 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-11 14:54:18.745157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745196 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-11 14:54:18.745212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.745232 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745238 | orchestrator | 2025-06-11 14:54:18.745244 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-11 14:54:18.745251 | orchestrator | Wednesday 11 June 2025 14:51:02 +0000 (0:00:00.696) 0:03:06.993 ******** 2025-06-11 14:54:18.745257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745273 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745292 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-11 14:54:18.745311 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745317 | orchestrator | 2025-06-11 14:54:18.745323 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-11 14:54:18.745329 | orchestrator | Wednesday 11 
June 2025 14:51:03 +0000 (0:00:01.101) 0:03:08.095 ********
2025-06-11 14:54:18.745336 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.745342 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.745348 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.745354 | orchestrator |
2025-06-11 14:54:18.745360 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-11 14:54:18.745408 | orchestrator | Wednesday 11 June 2025 14:51:05 +0000 (0:00:01.296) 0:03:09.392 ********
2025-06-11 14:54:18.745419 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.745426 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.745433 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.745440 | orchestrator |
2025-06-11 14:54:18.745447 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-11 14:54:18.745454 | orchestrator | Wednesday 11 June 2025 14:51:07 +0000 (0:00:02.074) 0:03:11.466 ********
2025-06-11 14:54:18.745461 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.745468 | orchestrator |
2025-06-11 14:54:18.745489 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-11 14:54:18.745496 | orchestrator | Wednesday 11 June 2025 14:51:08 +0000 (0:00:01.068) 0:03:12.535 ********
2025-06-11 14:54:18.745504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:54:18.745511 | orchestrator |
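
The task that follows registers mariadb with HAProxy quite differently from the HTTP services: the listener runs in TCP mode with hour-long keepalive timeouts, and instead of an auto-generated member list it carries a custom_member_list in which only testbed-node-0 is active while the other two Galera nodes are marked backup, so writes land on a single node at a time. Rendered as HAProxy configuration this amounts to roughly the sketch below (the internal VIP 192.168.16.9 is an assumption; the server lines are taken verbatim from the item in the task output):

    # Illustrative rendering of the mariadb listener defined in the task below.
    listen mariadb
        mode tcp
        bind 192.168.16.9:3306             # internal VIP (assumed)
        option clitcpka                    # frontend_tcp_extra
        timeout client 3600s
        option srvtcpka                    # backend_tcp_extra
        timeout server 3600s
        # custom_member_list: one active server, two hot standbys ('backup')
        server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
        server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
        server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup

mariadb_external_lb is disabled in these items, so the database stays reachable only via the internal VIP, and the AVAILABLE_WHEN_DONOR=1 environment lets the clustercheck health probe keep reporting a donor node as available during Galera state transfers.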
2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745562 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745594 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745621 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745628 | orchestrator | 2025-06-11 14:54:18.745635 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-11 14:54:18.745644 | orchestrator | Wednesday 11 June 2025 14:51:13 +0000 (0:00:02.259) 0:03:18.039 ******** 2025-06-11 14:54:18.745652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745693 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745700 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745707 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:54:18.745731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-11 14:54:18.745738 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745744 | orchestrator | 2025-06-11 14:54:18.745751 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-11 14:54:18.745757 | orchestrator | Wednesday 11 June 2025 14:51:16 +0000 (0:00:02.248) 
0:03:20.287 ******** 2025-06-11 14:54:18.745766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745780 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745804 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-11 14:54:18.745828 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745834 | orchestrator | 2025-06-11 14:54:18.745840 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-11 14:54:18.745846 | orchestrator | Wednesday 11 June 2025 14:51:18 +0000 (0:00:02.600) 0:03:22.888 ******** 2025-06-11 14:54:18.745852 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.745858 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.745864 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.745870 | orchestrator | 2025-06-11 14:54:18.745876 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-11 14:54:18.745881 | orchestrator | Wednesday 11 June 2025 14:51:20 +0000 (0:00:01.719) 0:03:24.608 ******** 2025-06-11 14:54:18.745886 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745892 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745897 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745902 | orchestrator |
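The `haproxy` sub-dict carried by each skipped item above is the service shape that kolla-ansible's haproxy-config role consumes. A minimal sketch of how such an entry could be rendered into an HAProxy `listen` section (illustrative only: `render_listen()` and the VIP 192.168.16.254 are assumptions, not the actual kolla template or this deployment's internal VIP):

```python
# Minimal sketch, not kolla-ansible source: rebuild an HAProxy "listen" block
# from a service entry shaped like the skipped items above. The member lines
# are copied from the log; the function and VIP are illustrative assumptions.
mariadb_lb = {
    "enabled": True,
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        "",  # the dumps end with an empty element; filtered out below
    ],
}

def render_listen(name: str, cfg: dict, vip: str = "192.168.16.254") -> str:
    """Render one listen block; a disabled entry produces no section."""
    if not cfg["enabled"]:
        return ""
    lines = [f"listen {name}",
             f"  mode {cfg['mode']}",
             f"  bind {vip}:{cfg['listen_port']}"]
    lines += [f"  {opt}" for opt in cfg["frontend_tcp_extra"] + cfg["backend_tcp_extra"]]
    lines += [f" {member}" for member in cfg["custom_member_list"] if member]
    return "\n".join(lines)

print(render_listen("mariadb", mariadb_lb))
```

Two details stand out in the dumped entry: only testbed-node-0 is an active backend (the other two members carry `backup`, so writes reach a single Galera node at a time and fail over on health-check failure, rise 2 / fall 5 at a 2000 ms check interval), and although `enabled` is True, the mariadb haproxy tasks were skipped in this run, consistent with ProxySQL handling MariaDB load balancing here (the surrounding proxysql-config tasks report `changed`).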
2025-06-11 14:54:18.745908 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-11 14:54:18.745913 | orchestrator | Wednesday 11 June 2025 14:51:21 +0000 (0:00:01.394) 0:03:26.002 ******** 2025-06-11 14:54:18.745918 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.745924 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.745929 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.745934 | orchestrator | 2025-06-11 14:54:18.745939 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-11 14:54:18.745950 | orchestrator | Wednesday 11 June 2025 14:51:22 +0000 (0:00:00.324) 0:03:26.326 ******** 2025-06-11 14:54:18.745958 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.745963 | orchestrator | 2025-06-11 14:54:18.745969 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-11 14:54:18.745974 | orchestrator | Wednesday 11 June 2025 14:51:23 +0000 (0:00:01.319) 0:03:27.646 ******** 2025-06-11 14:54:18.745980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-11 14:54:18.745986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-11 14:54:18.745994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-11 14:54:18.746000 | orchestrator |
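Each container entry in these dumps repeats the same five-field `healthcheck` dict. For orientation, a sketch of how that shape maps onto Docker's native health-check options (an assumption for readability: kolla-ansible applies these through its own container modules, and the flag builder below is not its code):

```python
# Illustrative mapping of the recurring healthcheck dict onto "docker run"
# health-check flags; numeric values in the dict are seconds (retries excepted).
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
    "timeout": "30",
}

def docker_health_flags(hc: dict) -> list[str]:
    """Build docker-run style flags from the dict shape dumped above."""
    return [
        f"--health-cmd={hc['test'][1]!r}",   # the CMD-SHELL payload
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(" ".join(docker_health_flags(healthcheck)))
```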
2025-06-11 14:54:18.746006 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-11 14:54:18.746011 | orchestrator | Wednesday 11 June 2025 14:51:24 +0000 (0:00:01.412) 0:03:29.058 ******** 2025-06-11 14:54:18.746052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-11 14:54:18.746058 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-11 14:54:18.746081 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.746087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-11 14:54:18.746093 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.746098 | orchestrator | 2025-06-11 14:54:18.746103 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-11 14:54:18.746109 | orchestrator | Wednesday 11 June 2025 14:51:25 +0000 (0:00:00.394) 0:03:29.453 ******** 2025-06-11 14:54:18.746115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-11 14:54:18.746122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-11 14:54:18.746127 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746132 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.746138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-11 14:54:18.746143 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.746149 | orchestrator | 2025-06-11 14:54:18.746158 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-11 14:54:18.746164 | orchestrator | Wednesday 11 June 2025 14:51:26 +0000 (0:00:00.834) 0:03:30.287 ******** 2025-06-11 14:54:18.746169 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746175 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.746180 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.746185 | orchestrator | 2025-06-11 14:54:18.746190 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-11 14:54:18.746196 | orchestrator | Wednesday 11 June 2025 14:51:26 +0000 (0:00:00.466) 0:03:30.754 ******** 2025-06-11 14:54:18.746201 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746206 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.746212 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.746217 | orchestrator |
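The memcached block shows a two-level gate: the per-service config copy reported `changed`, while every load-balancer task (single external frontend, firewall, ProxySQL) reported `skipping`. The container-level `enabled: True` deploys memcached, but the nested `haproxy.memcached.enabled: False` suppresses any frontend for it. A toy reproduction of that pattern (illustrative Python, not kolla-ansible source; the real conditions also involve group membership and deployment-wide flags):

```python
# Toy reproduction of the two-level enabled gating visible in this block.
services = {
    "memcached": {
        "enabled": True,                      # container is deployed on the node
        "haproxy": {
            "memcached": {"enabled": False},  # but no HAProxy frontend is wanted
        },
    },
}

for name, svc in services.items():
    # the per-service config copy runs whenever the container itself is enabled
    print(f"Copying over {name} haproxy config: "
          + ("changed" if svc["enabled"] else "skipping"))
    # frontend-specific tasks additionally require the nested flag
    for fe_name, fe in svc["haproxy"].items():
        print(f"Configuring firewall for {fe_name}: "
              + ("changed" if fe["enabled"] else "skipping"))
```

The `active_passive: True` field in the skipped items indicates that, were the frontend enabled, all but one backend would be marked `backup`, pinning cache traffic to a single memcached instance at a time.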
2025-06-11 14:54:18.746222 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-11 14:54:18.746228 | orchestrator | Wednesday 11 June 2025 14:51:27 +0000 (0:00:01.315) 0:03:32.069 ******** 2025-06-11 14:54:18.746233 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746244 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.746249 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.746254 | orchestrator | 2025-06-11 14:54:18.746260 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-11 14:54:18.746265 | orchestrator | Wednesday 11 June 2025 14:51:28 +0000 (0:00:00.305) 0:03:32.375 ******** 2025-06-11 14:54:18.746270 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.746275 | orchestrator | 2025-06-11 14:54:18.746281 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-11 14:54:18.746286 | orchestrator | Wednesday 11 June 2025 14:51:29 +0000 (0:00:01.477) 0:03:33.853 ******** 2025-06-11 14:54:18.746304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 14:54:18.746310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2',
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 14:54:18.746346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.746356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.746422 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-06-11 14:54:18.746504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.746520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746544 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 14:54:18.746587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.746596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746633 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.746646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.746721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746738 | orchestrator | 2025-06-11 14:54:18.746743 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-11 14:54:18.746748 | orchestrator | Wednesday 11 June 2025 14:51:33 +0000 (0:00:04.143) 0:03:37.996 ******** 2025-06-11 14:54:18.746757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 14:54:18.746763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.746796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 
14:54:18.746805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746842 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.746862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.746948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.746959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.746984 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.746990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.746999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-11 14:54:18.747005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.747010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747024 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.747030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 14:54:18.747036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-11 14:54:18.747071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.747083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-11 14:54:18.747088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 14:54:18.747103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-11 14:54:18.747217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-11 14:54:18.747228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.747235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-11 14:54:18.747245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-11 14:54:18.747251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.747257 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.747263 | orchestrator |
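Every loop item in the task above reports "skipping": the haproxy-config role iterates over the full neutron service map, but an item only produces configuration when the service itself is enabled, it exposes enabled haproxy endpoints, and, for this particular task, the deployment uses a single external frontend (the toggle for that is not visible in this log, so the skips here are expected). A minimal sketch of that per-item gate, with illustrative Python names rather than kolla-ansible's actual variables:

    # Sketch only: approximates the per-item gating suggested by the loop
    # output above; the real logic lives in kolla-ansible's haproxy-config
    # role as Jinja/YAML conditions, not Python.
    def items_to_configure(services, single_external_frontend):
        for name, service in services.items():
            enabled = str(service.get('enabled')).lower() in ('true', 'yes')
            endpoints = service.get('haproxy', {})
            has_endpoint = any(str(ep.get('enabled')).lower() in ('true', 'yes')
                               for ep in endpoints.values())
            if enabled and has_endpoint and single_external_frontend:
                yield name, service
            # anything else shows up as "skipping" in the log

Note that the dumped flags mix native booleans with 'yes'/'no' strings ('enabled': False on most agents, 'enabled': 'no' on neutron-tls-proxy), which is why the sketch normalizes through str().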
2025-06-11 14:54:18.747268 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-11 14:54:18.747278 | orchestrator | Wednesday 11 June 2025 14:51:35 +0000 (0:00:01.755) 0:03:39.752 ********
2025-06-11 14:54:18.747284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747296 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.747304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747316 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.747321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-11 14:54:18.747332 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.747337 | orchestrator |
2025-06-11 14:54:18.747343 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-11 14:54:18.747348 | orchestrator | Wednesday 11 June 2025 14:51:37 +0000 (0:00:01.989) 0:03:41.742 ********
2025-06-11 14:54:18.747354 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.747359 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.747379 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.747385 | orchestrator |
2025-06-11 14:54:18.747390 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-11 14:54:18.747396 | orchestrator | Wednesday 11 June 2025 14:51:38 +0000 (0:00:01.336) 0:03:43.078 ********
2025-06-11 14:54:18.747401 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.747406 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.747412 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.747417 | orchestrator |
2025-06-11 14:54:18.747422 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-11 14:54:18.747428 | orchestrator | Wednesday 11 June 2025 14:51:40 +0000 (0:00:02.043) 0:03:45.122 ********
2025-06-11 14:54:18.747433 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.747439 | orchestrator |
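The two "changed" proxysql-config tasks above drop per-service user and query-rule snippets on each controller, which ProxySQL later merges so neutron's database traffic is routed through the proxy rather than straight at MariaDB. The rendered files are not echoed into the log; as a rough illustration only, a users entry would carry fields along the lines of ProxySQL's mysql_users table (all values below are assumptions, not taken from this run):

    # Hypothetical shape of one rendered ProxySQL users entry; the actual
    # template and field set live in kolla-ansible's proxysql-config role.
    neutron_proxysql_user = {
        "username": "neutron",                         # service DB account
        "password": "<from passwords.yml, never logged>",
        "default_hostgroup": 0,                        # writer hostgroup (assumed id)
        "transaction_persistent": 1,                   # pin transactions to one backend
    }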
2025-06-11 14:54:18.747444 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-11 14:54:18.747450 | orchestrator | Wednesday 11 June 2025 14:51:42 +0000 (0:00:01.397) 0:03:46.519 ********
2025-06-11 14:54:18.747458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747484 | orchestrator |
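The three "changed" results above write the same placement-api definition to each controller; the only per-node difference is the healthcheck URL (192.168.16.10, .11 and .12 on port 8780). The nested 'haproxy' dict is what the role turns into the internal and external frontends for the service. A rough sketch of that mapping, with a placeholder VIP, since the real kolla-ansible template also handles TLS, ACLs and timeouts:

    # Minimal sketch of turning one 'haproxy' endpoint from the item above
    # into an HAProxy 'listen' stanza; illustrative only.
    def render_listen(name, ep, backends):
        lines = [f"listen {name}",
                 f"    mode {ep['mode']}",
                 f"    bind <vip>:{ep['port']}"]   # <vip> stands in for the internal/external VIP
        lines += [f"    server {host} {addr}:{ep['listen_port']} check"
                  for host, addr in backends.items()]
        return "\n".join(lines)

    print(render_listen(
        "placement_api",
        {'enabled': True, 'mode': 'http', 'port': '8780', 'listen_port': '8780'},
        {"testbed-node-0": "192.168.16.10",
         "testbed-node-1": "192.168.16.11",
         "testbed-node-2": "192.168.16.12"}))

Run, this prints a listen stanza with one check-enabled server line per controller, which is roughly what ends up in the generated haproxy configuration for placement_api.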
2025-06-11 14:54:18.747490 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-06-11 14:54:18.747495 | orchestrator | Wednesday 11 June 2025 14:51:45 +0000 (0:00:03.152) 0:03:49.672 ********
2025-06-11 14:54:18.747500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747506 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.747515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747525 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.747531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747537 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.747542 | orchestrator |
2025-06-11 14:54:18.747547 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-11 14:54:18.747553 | orchestrator | Wednesday 11 June 2025 14:51:45 +0000 (0:00:00.496) 0:03:50.168 ********
2025-06-11 14:54:18.747558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747571 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:54:18.747579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747590 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:54:18.747595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-11 14:54:18.747606 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:54:18.747612 | orchestrator |
2025-06-11 14:54:18.747617 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-11 14:54:18.747622 | orchestrator | Wednesday 11 June 2025 14:51:46 +0000 (0:00:00.963) 0:03:51.132 ********
2025-06-11 14:54:18.747628 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.747633 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.747638 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.747644 | orchestrator |
2025-06-11 14:54:18.747649 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-11 14:54:18.747654 | orchestrator | Wednesday 11 June 2025 14:51:48 +0000 (0:00:01.304) 0:03:52.436 ********
2025-06-11 14:54:18.747660 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:54:18.747665 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:54:18.747670 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:54:18.747676 | orchestrator |
2025-06-11 14:54:18.747681 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-11 14:54:18.747694 | orchestrator | Wednesday 11 June 2025 14:51:50 +0000 (0:00:02.030) 0:03:54.466 ********
2025-06-11 14:54:18.747700 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:54:18.747705 | orchestrator |
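The nova role is included next; in the haproxy task that follows, each nova-api item carries two endpoint pairs: the compute API on 8774 (internal and external) and the metadata API on 8775, whose external variant is disabled ('enabled': 'no'). A short sketch of which frontends that item actually yields, normalizing the mixed boolean/string flags first (endpoint data copied from the log, helper names illustrative):

    # Sketch: which HAProxy frontends the nova-api item below would yield.
    def truthy(v):
        return v if isinstance(v, bool) else str(v).lower() in ("yes", "true", "1")

    nova_haproxy = {
        "nova_api":               {"enabled": True,  "port": "8774"},
        "nova_api_external":      {"enabled": True,  "port": "8774"},
        "nova_metadata":          {"enabled": True,  "port": "8775"},
        "nova_metadata_external": {"enabled": "no",  "port": "8775"},  # as logged
    }
    active = [name for name, ep in nova_haproxy.items() if truthy(ep["enabled"])]
    # -> ['nova_api', 'nova_api_external', 'nova_metadata']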
2025-06-11 14:54:18.747710 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-11 14:54:18.747716 | orchestrator | Wednesday 11 June 2025 14:51:51 +0000 (0:00:01.279) 0:03:55.745 ********
2025-06-11 14:54:18.747784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-11 14:54:18.747823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-11 14:54:18.747838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747875 | orchestrator |
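
Each entry in a service's haproxy map, as dumped in the items above, becomes one HAProxy frontend/backend pair: entries with external: false bind to the internal VIP, while the *_external entries bind to the external VIP under external_fqdn. Restated as YAML, with values taken from the nova-api item above and comments as interpretation:

    nova-api:
      haproxy:
        nova_api:                  # internal endpoint on the internal VIP
          enabled: true
          mode: http
          external: false
          port: "8774"             # port HAProxy listens on
          listen_port: "8774"      # port the nova-api backends serve
          tls_backend: "no"        # plain HTTP between HAProxy and the backends
        nova_api_external:         # public endpoint
          enabled: true
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "8774"
        nova_metadata:
          enabled: true
          external: false
          port: "8775"
        nova_metadata_external:
          enabled: "no"            # string "no": the metadata API stays internal-only

2025-06-11 14:54:18.747881 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-11 14:54:18.747888 | orchestrator | Wednesday 11 June 2025 14:51:55 +0000 (0:00:04.314) 0:04:00.059 ******** 2025-06-11 14:54:18.747903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes':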
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.747910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747923 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.747934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.747941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.747973 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.747983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.747990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.748001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.748008 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748014 | orchestrator | 2025-06-11 14:54:18.748020 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-11 14:54:18.748026 | orchestrator | Wednesday 11 June 2025 14:51:56 +0000 (0:00:00.638) 0:04:00.698 ******** 2025-06-11 14:54:18.748033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748064 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748099 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-11 14:54:18.748130 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748137 | orchestrator | 2025-06-11 14:54:18.748143 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-11 14:54:18.748149 | orchestrator | Wednesday 11 June 2025 14:51:57 +0000 (0:00:00.873) 0:04:01.571 ******** 2025-06-11 14:54:18.748155 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.748161 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.748167 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.748174 | orchestrator | 2025-06-11 14:54:18.748180 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-11 14:54:18.748186 | orchestrator | Wednesday 11 June 2025 14:51:58 +0000 (0:00:01.584) 0:04:03.156 ******** 2025-06-11 14:54:18.748192 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.748198 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.748203 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.748209 | orchestrator |
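
Two details stand out in the block above. The 'Configuring firewall for nova' items skip on every node, which is expected when kolla-ansible is not managing firewalld rules for the external API (gated behind a switch along the lines of enable_external_api_firewalld; the exact flag name is an assumption here, it does not appear in this log). And alongside HAProxy, every service ships ProxySQL users and rules configs so its MariaDB traffic is routed through ProxySQL to the Galera cluster. A rough, illustrative shape of such a per-service entry (not the literal kolla-ansible file layout):

    # Illustrative only -- the concepts behind the per-service ProxySQL files;
    # actual kolla-ansible paths and schema may differ:
    users:
      - username: nova                             # service DB account
        password: "{{ nova_database_password }}"   # assumed to come from passwords.yml
    rules:
      - schemaname: nova                           # route this schema's queries
        destination_hostgroup: 0                   # e.g. the Galera writer hostgroup

2025-06-11 14:54:18.748214 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-11 14:54:18.748224 | orchestrator | Wednesday 11 June 2025 14:52:00 +0000 (0:00:02.035) 0:04:05.191 ******** 2025-06-11 14:54:18.748229 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.748235 | orchestrator | 2025-06-11 14:54:18.748240 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-11 14:54:18.748248 | orchestrator | Wednesday 11 June 2025 14:52:02 +0000 (0:00:01.246) 0:04:06.438 ******** 2025-06-11 14:54:18.748254 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-11 14:54:18.748259 | orchestrator | 2025-06-11 14:54:18.748265 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-11 14:54:18.748270 | orchestrator | Wednesday 11 June 2025 14:52:03 +0000 (0:00:01.450) 0:04:07.888 ******** 2025-06-11 14:54:18.748276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-11 14:54:18.748282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':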
'6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-11 14:54:18.748288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-11 14:54:18.748293 | orchestrator | 2025-06-11 14:54:18.748299 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-11 14:54:18.748308 | orchestrator | Wednesday 11 June 2025 14:52:07 +0000 (0:00:03.872) 0:04:11.760 ******** 2025-06-11 14:54:18.748313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748319 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748330 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748345 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748350 | orchestrator |
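
The nova-novncproxy items above differ from the plain REST APIs in one detail: backend_http_extra injects 'timeout tunnel 1h' into the generated backend. noVNC console sessions are long-lived, WebSocket-style tunnels, and without a generous tunnel timeout HAProxy would drop idle consoles; the serial console proxy further down uses the same mechanism with 'timeout tunnel 10m'. The relevant map, restated as YAML with values from the log:

    nova-novncproxy:
      haproxy:
        nova_novncproxy:
          enabled: true
          mode: http
          external: false
          port: "6080"
          backend_http_extra:
            - timeout tunnel 1h    # keep long-lived console tunnels open
        nova_novncproxy_external:
          enabled: true
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "6080"
          backend_http_extra:
            - timeout tunnel 1h

2025-06-11 14:54:18.748356 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-11 14:54:18.748373 | orchestrator | Wednesday 11 June 2025 14:52:09 +0000 (0:00:01.936) 0:04:13.697 ******** 2025-06-11 14:54:18.748381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080',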
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748394 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748420 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-11 14:54:18.748431 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748437 | orchestrator | 2025-06-11 14:54:18.748442 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-11 14:54:18.748447 | orchestrator | Wednesday 11 June 2025 14:52:11 +0000 (0:00:01.950) 0:04:15.648 ******** 2025-06-11 14:54:18.748453 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.748458 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.748464 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.748469 | orchestrator | 2025-06-11 14:54:18.748474 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-11 14:54:18.748480 | orchestrator | Wednesday 11 June 2025 14:52:14 +0000 (0:00:03.192) 0:04:18.840 ******** 2025-06-11 14:54:18.748485 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.748491 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.748496 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.748501 | orchestrator | 2025-06-11 14:54:18.748507 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-11 14:54:18.748515 | orchestrator | Wednesday 11 June 2025 14:52:17 +0000 (0:00:03.075) 0:04:21.916 ******** 2025-06-11 14:54:18.748521 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-11 14:54:18.748527 | orchestrator | 2025-06-11 14:54:18.748533 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-11 14:54:18.748543 | orchestrator | Wednesday 11 June 2025 14:52:18 +0000 (0:00:00.849) 0:04:22.765 ******** 2025-06-11 14:54:18.748548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748554 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748566 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748580 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748585 | orchestrator | 2025-06-11 14:54:18.748590 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-11 14:54:18.748596 | orchestrator | Wednesday 11 June 2025 14:52:19 +0000 (0:00:01.337) 0:04:24.102 ******** 2025-06-11 14:54:18.748602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748607 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748619 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-11 14:54:18.748635 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748641 | orchestrator | 2025-06-11 14:54:18.748646 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-11 14:54:18.748654 | orchestrator | Wednesday 11 June 2025 14:52:21 +0000 (0:00:01.648) 0:04:25.751 ******** 2025-06-11 14:54:18.748660 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748665 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748671 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748676 | orchestrator | 2025-06-11 14:54:18.748681 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-11 14:54:18.748687 | orchestrator | Wednesday 11 June 2025 14:52:22 +0000 (0:00:01.330) 0:04:27.082 ******** 2025-06-11 14:54:18.748692 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.748697 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.748703 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.748708 | orchestrator | 2025-06-11 14:54:18.748714 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-11 14:54:18.748719 | orchestrator | Wednesday 11 June 2025 14:52:25 +0000 (0:00:02.181) 0:04:29.263 ******** 2025-06-11 14:54:18.748724 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.748730 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.748735 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.748740 | orchestrator | 2025-06-11 14:54:18.748746 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-11 14:54:18.748751 | orchestrator | Wednesday 11 June 2025 14:52:27 +0000 (0:00:02.852) 0:04:32.116 ******** 2025-06-11 14:54:18.748756 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-11 14:54:18.748762 | orchestrator | 2025-06-11 14:54:18.748767 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-11 14:54:18.748772 | orchestrator | Wednesday 11 June 2025 14:52:29 +0000 (0:00:01.395) 0:04:33.512 ******** 2025-06-11 14:54:18.748778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748784 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748798 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748809 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748815 | orchestrator | 2025-06-11 14:54:18.748820 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-11 14:54:18.748830 | orchestrator | Wednesday 11 June 2025 14:52:30 +0000 (0:00:01.235) 0:04:34.747 ******** 2025-06-11 14:54:18.748835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748841 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748855 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-11 14:54:18.748866 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748871 | orchestrator | 2025-06-11 14:54:18.748877 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-11 14:54:18.748882 | orchestrator | Wednesday 11 June 2025 14:52:31 +0000 (0:00:01.295) 0:04:36.042 ******** 2025-06-11 14:54:18.748888 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.748893 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.748898 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.748903 | 
orchestrator | 2025-06-11 14:54:18.748909 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-11 14:54:18.748914 | orchestrator | Wednesday 11 June 2025 14:52:33 +0000 (0:00:01.755) 0:04:37.797 ******** 2025-06-11 14:54:18.748920 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.748925 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.748930 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.748936 | orchestrator | 2025-06-11 14:54:18.748941 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-11 14:54:18.748947 | orchestrator | Wednesday 11 June 2025 14:52:35 +0000 (0:00:02.395) 0:04:40.193 ******** 2025-06-11 14:54:18.748952 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.748957 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.748963 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.748968 | orchestrator | 2025-06-11 14:54:18.748973 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-11 14:54:18.748979 | orchestrator | Wednesday 11 June 2025 14:52:39 +0000 (0:00:03.121) 0:04:43.314 ******** 2025-06-11 14:54:18.748984 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.748989 | orchestrator | 2025-06-11 14:54:18.748995 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-11 14:54:18.749000 | orchestrator | Wednesday 11 June 2025 14:52:40 +0000 (0:00:01.550) 0:04:44.865 ******** 2025-06-11 14:54:18.749009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.749019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-06-11 14:54:18.749029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.749079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749108 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749119 | orchestrator |
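
Of the octavia items above, only octavia-api carries a haproxy map, so only it receives load-balancer frontends; driver-agent, health-manager, housekeeping and worker expose no API port and are skipped by haproxy-config. Note also that octavia spells enabled as the strings 'yes'/'no' where nova uses booleans; the templates evidently accept both (presumably normalized with Jinja2's |bool filter, an assumption here). The API entry restated as YAML with values from the log:

    octavia-api:
      haproxy:
        octavia_api:
          enabled: "yes"           # string form; other services use booleans
          mode: http
          external: false
          port: "9876"
          listen_port: "9876"
          tls_backend: "no"
        octavia_api_external:
          enabled: "yes"
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9876"

2025-06-11 14:54:18.749124 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-11 14:54:18.749130 | orchestrator | Wednesday 11 June 2025 14:52:44 +0000 (0:00:03.419) 0:04:48.284 ******** 2025-06-11 14:54:18.749138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.749144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',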
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749175 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.749190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749265 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.749278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-11 14:54:18.749284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2025-06-11 14:54:18.749292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-11 14:54:18.749298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-11 14:54:18.749309 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749314 | orchestrator | 2025-06-11 14:54:18.749320 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-11 14:54:18.749325 | orchestrator | Wednesday 11 June 2025 14:52:44 +0000 (0:00:00.685) 0:04:48.970 ******** 2025-06-11 14:54:18.749331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749342 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749414 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-11 14:54:18.749430 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749435 | orchestrator | 2025-06-11 14:54:18.749441 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-11 
14:54:18.749446 | orchestrator | Wednesday 11 June 2025 14:52:45 +0000 (0:00:00.991) 0:04:49.961 ******** 2025-06-11 14:54:18.749451 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.749456 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.749462 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.749467 | orchestrator | 2025-06-11 14:54:18.749472 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-11 14:54:18.749478 | orchestrator | Wednesday 11 June 2025 14:52:47 +0000 (0:00:01.357) 0:04:51.318 ******** 2025-06-11 14:54:18.749483 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.749488 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.749493 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.749499 | orchestrator | 2025-06-11 14:54:18.749504 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-11 14:54:18.749509 | orchestrator | Wednesday 11 June 2025 14:52:48 +0000 (0:00:01.910) 0:04:53.229 ******** 2025-06-11 14:54:18.749514 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.749520 | orchestrator | 2025-06-11 14:54:18.749525 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-11 14:54:18.749530 | orchestrator | Wednesday 11 June 2025 14:52:50 +0000 (0:00:01.274) 0:04:54.504 ******** 2025-06-11 14:54:18.749541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:54:18.749553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:54:18.749575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:54:18.749583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:54:18.749593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:54:18.749603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:54:18.749609 | orchestrator | 2025-06-11 14:54:18.749615 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-11 14:54:18.749620 | orchestrator | Wednesday 11 June 2025 14:52:55 +0000 (0:00:05.449) 0:04:59.954 ******** 2025-06-11 14:54:18.749639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:54:18.749646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:54:18.749655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:54:18.749666 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:54:18.749678 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:54:18.749703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:54:18.749709 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749714 | orchestrator | 2025-06-11 14:54:18.749719 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-11 14:54:18.749725 | orchestrator | Wednesday 11 June 2025 14:52:56 +0000 (0:00:00.676) 0:05:00.630 ******** 2025-06-11 14:54:18.749730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
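A note on the opensearch-dashboards entries above: the container environment disables the Dashboards security plugin (`OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN: 'False'`), and the `auth_user`/`auth_pass` keys on the haproxy entries suggest the login prompt is instead supplied by HAProxy basic auth in front of the service. A sketch of such a guard, under that assumption (kolla's actual template may differ):

    def basic_auth_frontend(name, port, user, password, backend):
        """Render an HAProxy frontend protected by a userlist (illustrative)."""
        return "\n".join([
            f"userlist {name}_auth",
            f"    user {user} insecure-password {password}",
            "",
            f"frontend {name}",
            f"    bind *:{port}",
            f"    acl auth_ok http_auth({name}_auth)",
            f"    http-request auth realm {name} unless auth_ok",
            f"    default_backend {backend}",
        ])

    print(basic_auth_frontend("opensearch_dashboards_external", 5601,
                              "opensearch", "password",
                              "opensearch_dashboards_back"))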
'frontend_http_extra': ['option dontlog-normal']}})  2025-06-11 14:54:18.749736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749752 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-11 14:54:18.749767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749778 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-11 14:54:18.749788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-11 14:54:18.749799 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749805 | orchestrator | 2025-06-11 14:54:18.749810 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-11 14:54:18.749815 | orchestrator | Wednesday 11 June 2025 14:52:57 +0000 (0:00:00.919) 0:05:01.549 ******** 2025-06-11 14:54:18.749821 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749826 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749831 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749837 | orchestrator | 2025-06-11 14:54:18.749842 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-11 14:54:18.749847 | orchestrator | Wednesday 11 June 2025 14:52:58 +0000 (0:00:00.802) 0:05:02.352 ******** 2025-06-11 14:54:18.749853 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.749858 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.749863 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.749868 | orchestrator | 2025-06-11 14:54:18.749886 | orchestrator | TASK [include_role : prometheus] 
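The `healthcheck` blocks in these dicts (`interval`, `retries`, `start_period`, `timeout` in seconds, plus a `CMD-SHELL` test such as `healthcheck_curl http://192.168.16.10:9200`) correspond to Docker's native HEALTHCHECK options; `healthcheck_curl` and `healthcheck_port` are small helper scripts shipped inside the kolla images. A sketch of that mapping (illustrative only, not kolla_docker's code):

    def docker_healthcheck_args(hc):
        """Translate a kolla healthcheck dict into 'docker run' flags."""
        kind, cmd = hc["test"]   # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        assert kind == "CMD-SHELL"
        return [
            "--health-cmd", cmd,
            "--health-interval", f"{hc['interval']}s",
            "--health-timeout", f"{hc['timeout']}s",
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
        ]

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
          "timeout": "30"}
    print(" ".join(docker_healthcheck_args(hc)))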
*********************************************** 2025-06-11 14:54:18.749892 | orchestrator | Wednesday 11 June 2025 14:52:59 +0000 (0:00:01.338) 0:05:03.691 ******** 2025-06-11 14:54:18.749898 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.749903 | orchestrator | 2025-06-11 14:54:18.749908 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-11 14:54:18.749914 | orchestrator | Wednesday 11 June 2025 14:53:00 +0000 (0:00:01.421) 0:05:05.112 ******** 2025-06-11 14:54:18.749919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 14:54:18.749929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 14:54:18.749937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.749943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.749948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.749954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.749971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.749976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.749985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 14:54:18.749993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.749999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.750009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 14:54:18.750063 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 14:54:18.750068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-11 14:54:18.750076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-11 14:54:18.750085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 14:54:18.750132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-11 14:54:18.750137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750155 | orchestrator | 2025-06-11 14:54:18.750160 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-11 14:54:18.750165 | orchestrator | Wednesday 11 June 2025 14:53:05 +0000 (0:00:04.832) 0:05:09.945 ******** 2025-06-11 14:54:18.750170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-11 14:54:18.750175 | 
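The prometheus-server entries above set `'active_passive': True` on both their internal and external haproxy definitions: only one Prometheus backend should receive traffic at a time, since each instance keeps an independent local TSDB and round-robin across them would return inconsistent query results. The usual HAProxy idiom for this is to mark all but one server as backup; a sketch under that assumption (kolla's real template may differ in detail):

    # Node addresses as they appear in the healthcheck URLs above.
    nodes = {"testbed-node-0": "192.168.16.10",
             "testbed-node-1": "192.168.16.11",
             "testbed-node-2": "192.168.16.12"}

    def render_backend(name, port, active_passive=True):
        """Emit an HAProxy backend; peers beyond the first become backups."""
        lines = [f"backend {name}_back", "    mode http"]
        for i, (host, ip) in enumerate(nodes.items()):
            backup = " backup" if active_passive and i > 0 else ""
            lines.append(f"    server {host} {ip}:{port} check{backup}")
        return "\n".join(lines)

    print(render_backend("prometheus_server", 9091))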
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.750188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-11 14:54:18.750212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-11 14:54:18.750217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-11 14:54:18.750230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.750235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750263 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-11 14:54:18.750288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
2025-06-11 14:54:18.750293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750311 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-11 14:54:18.750326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 14:54:18.750332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 
14:54:18.750336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-11 14:54:18.750355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-11 14:54:18.750375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 14:54:18.750388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 14:54:18.750393 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750398 | orchestrator | 2025-06-11 14:54:18.750403 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-11 14:54:18.750407 | orchestrator | Wednesday 11 June 2025 14:53:06 +0000 (0:00:00.958) 0:05:10.903 ******** 2025-06-11 14:54:18.750412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750433 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750460 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750465 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-11 14:54:18.750479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-11 14:54:18.750496 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750501 | orchestrator | 2025-06-11 14:54:18.750505 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-11 14:54:18.750510 | orchestrator | Wednesday 11 June 2025 14:53:07 +0000 (0:00:00.987) 0:05:11.891 ******** 2025-06-11 14:54:18.750515 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750520 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750524 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750529 | orchestrator | 2025-06-11 14:54:18.750534 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-11 14:54:18.750538 | orchestrator | Wednesday 11 June 2025 14:53:08 +0000 (0:00:01.297) 0:05:13.189 ******** 2025-06-11 14:54:18.750543 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750548 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750552 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750557 | orchestrator | 2025-06-11 14:54:18.750562 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-11 14:54:18.750566 | orchestrator | Wednesday 11 June 2025 14:53:10 +0000 (0:00:01.336) 0:05:14.526 ******** 2025-06-11 14:54:18.750571 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.750576 | orchestrator | 2025-06-11 14:54:18.750580 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-11 14:54:18.750585 | orchestrator | Wednesday 11 June 2025 14:53:11 +0000 (0:00:01.463) 0:05:15.989 ******** 2025-06-11 14:54:18.750590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:54:18.750604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:54:18.750609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-11 14:54:18.750615 | orchestrator | 2025-06-11 14:54:18.750622 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-11 14:54:18.750627 | orchestrator | Wednesday 11 June 2025 14:53:14 +0000 (0:00:02.893) 0:05:18.882 ******** 2025-06-11 14:54:18.750632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-11 14:54:18.750637 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-11 14:54:18.750654 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-11 14:54:18.750664 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750669 | orchestrator | 2025-06-11 14:54:18.750674 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-11 14:54:18.750678 | orchestrator | Wednesday 11 June 2025 14:53:15 +0000 (0:00:00.750) 0:05:19.633 ******** 2025-06-11 14:54:18.750683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-11 14:54:18.750688 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-11 
14:54:18.750698 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-11 14:54:18.750707 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750712 | orchestrator | 2025-06-11 14:54:18.750717 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-11 14:54:18.750722 | orchestrator | Wednesday 11 June 2025 14:53:16 +0000 (0:00:00.657) 0:05:20.290 ******** 2025-06-11 14:54:18.750729 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750734 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750738 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750743 | orchestrator | 2025-06-11 14:54:18.750748 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-11 14:54:18.750753 | orchestrator | Wednesday 11 June 2025 14:53:16 +0000 (0:00:00.433) 0:05:20.723 ******** 2025-06-11 14:54:18.750757 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750762 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750767 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750771 | orchestrator | 2025-06-11 14:54:18.750776 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-11 14:54:18.750781 | orchestrator | Wednesday 11 June 2025 14:53:17 +0000 (0:00:01.420) 0:05:22.143 ******** 2025-06-11 14:54:18.750786 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:54:18.750791 | orchestrator | 2025-06-11 14:54:18.750795 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-11 14:54:18.750804 | orchestrator | Wednesday 11 June 2025 14:53:19 +0000 (0:00:01.765) 0:05:23.909 ******** 2025-06-11 14:54:18.750809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-11 14:54:18.750895 | orchestrator | 2025-06-11 14:54:18.750900 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-11 14:54:18.750905 | orchestrator | Wednesday 11 June 2025 14:53:25 +0000 (0:00:06.101) 0:05:30.010 ******** 2025-06-11 14:54:18.750910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750923 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.750928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750942 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.750950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-11 14:54:18.750961 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.750965 | orchestrator | 2025-06-11 14:54:18.750970 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
*********************** 2025-06-11 14:54:18.750977 | orchestrator | Wednesday 11 June 2025 14:53:27 +0000 (0:00:01.340) 0:05:31.351 ******** 2025-06-11 14:54:18.751001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751021 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751045 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-11 14:54:18.751073 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751078 | orchestrator | 2025-06-11 14:54:18.751082 | orchestrator | TASK 
[proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-11 14:54:18.751087 | orchestrator | Wednesday 11 June 2025 14:53:28 +0000 (0:00:00.981) 0:05:32.332 ******** 2025-06-11 14:54:18.751092 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.751097 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.751101 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.751106 | orchestrator | 2025-06-11 14:54:18.751111 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-11 14:54:18.751116 | orchestrator | Wednesday 11 June 2025 14:53:29 +0000 (0:00:01.292) 0:05:33.625 ******** 2025-06-11 14:54:18.751120 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.751125 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.751134 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.751138 | orchestrator | 2025-06-11 14:54:18.751143 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-11 14:54:18.751148 | orchestrator | Wednesday 11 June 2025 14:53:31 +0000 (0:00:02.187) 0:05:35.812 ******** 2025-06-11 14:54:18.751153 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751157 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751162 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751167 | orchestrator | 2025-06-11 14:54:18.751171 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-11 14:54:18.751176 | orchestrator | Wednesday 11 June 2025 14:53:32 +0000 (0:00:00.636) 0:05:36.449 ******** 2025-06-11 14:54:18.751181 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751186 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751190 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751195 | orchestrator | 2025-06-11 14:54:18.751200 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-11 14:54:18.751207 | orchestrator | Wednesday 11 June 2025 14:53:32 +0000 (0:00:00.317) 0:05:36.766 ******** 2025-06-11 14:54:18.751212 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751216 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751221 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751226 | orchestrator | 2025-06-11 14:54:18.751231 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-11 14:54:18.751235 | orchestrator | Wednesday 11 June 2025 14:53:32 +0000 (0:00:00.323) 0:05:37.089 ******** 2025-06-11 14:54:18.751240 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751245 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751250 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751254 | orchestrator | 2025-06-11 14:54:18.751259 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-11 14:54:18.751264 | orchestrator | Wednesday 11 June 2025 14:53:33 +0000 (0:00:00.330) 0:05:37.420 ******** 2025-06-11 14:54:18.751269 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751273 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751278 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751283 | orchestrator | 2025-06-11 14:54:18.751288 | orchestrator | TASK [include_role : zun] 
****************************************************** 2025-06-11 14:54:18.751293 | orchestrator | Wednesday 11 June 2025 14:53:33 +0000 (0:00:00.629) 0:05:38.050 ******** 2025-06-11 14:54:18.751297 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751302 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751307 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751312 | orchestrator | 2025-06-11 14:54:18.751316 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-11 14:54:18.751321 | orchestrator | Wednesday 11 June 2025 14:53:34 +0000 (0:00:00.534) 0:05:38.584 ******** 2025-06-11 14:54:18.751326 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751331 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751335 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751340 | orchestrator | 2025-06-11 14:54:18.751345 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-11 14:54:18.751350 | orchestrator | Wednesday 11 June 2025 14:53:35 +0000 (0:00:00.676) 0:05:39.261 ******** 2025-06-11 14:54:18.751354 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751359 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751376 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751381 | orchestrator | 2025-06-11 14:54:18.751386 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-11 14:54:18.751390 | orchestrator | Wednesday 11 June 2025 14:53:35 +0000 (0:00:00.670) 0:05:39.931 ******** 2025-06-11 14:54:18.751395 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751400 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751405 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751413 | orchestrator | 2025-06-11 14:54:18.751418 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-11 14:54:18.751423 | orchestrator | Wednesday 11 June 2025 14:53:36 +0000 (0:00:00.919) 0:05:40.851 ******** 2025-06-11 14:54:18.751428 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751433 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751437 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751442 | orchestrator | 2025-06-11 14:54:18.751447 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-11 14:54:18.751455 | orchestrator | Wednesday 11 June 2025 14:53:37 +0000 (0:00:00.893) 0:05:41.745 ******** 2025-06-11 14:54:18.751460 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751464 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751469 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751474 | orchestrator | 2025-06-11 14:54:18.751479 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-11 14:54:18.751483 | orchestrator | Wednesday 11 June 2025 14:53:38 +0000 (0:00:00.873) 0:05:42.619 ******** 2025-06-11 14:54:18.751488 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.751493 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.751498 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.751503 | orchestrator | 2025-06-11 14:54:18.751507 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-11 14:54:18.751512 | orchestrator | Wednesday 11 June 2025 14:53:48 +0000 (0:00:10.286) 
0:05:52.905 ******** 2025-06-11 14:54:18.751517 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751522 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751527 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751531 | orchestrator | 2025-06-11 14:54:18.751536 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-11 14:54:18.751541 | orchestrator | Wednesday 11 June 2025 14:53:49 +0000 (0:00:00.786) 0:05:53.692 ******** 2025-06-11 14:54:18.751546 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.751550 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.751555 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.751560 | orchestrator | 2025-06-11 14:54:18.751565 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-11 14:54:18.751570 | orchestrator | Wednesday 11 June 2025 14:53:57 +0000 (0:00:08.510) 0:06:02.203 ******** 2025-06-11 14:54:18.751574 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751579 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751584 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751588 | orchestrator | 2025-06-11 14:54:18.751593 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-11 14:54:18.751598 | orchestrator | Wednesday 11 June 2025 14:54:00 +0000 (0:00:02.759) 0:06:04.962 ******** 2025-06-11 14:54:18.751603 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:54:18.751608 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:54:18.751612 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:54:18.751617 | orchestrator | 2025-06-11 14:54:18.751622 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-11 14:54:18.751627 | orchestrator | Wednesday 11 June 2025 14:54:10 +0000 (0:00:09.607) 0:06:14.570 ******** 2025-06-11 14:54:18.751631 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751636 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751641 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751646 | orchestrator | 2025-06-11 14:54:18.751650 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-11 14:54:18.751655 | orchestrator | Wednesday 11 June 2025 14:54:11 +0000 (0:00:00.755) 0:06:15.326 ******** 2025-06-11 14:54:18.751660 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751667 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751672 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751677 | orchestrator | 2025-06-11 14:54:18.751682 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-11 14:54:18.751690 | orchestrator | Wednesday 11 June 2025 14:54:11 +0000 (0:00:00.349) 0:06:15.676 ******** 2025-06-11 14:54:18.751695 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751699 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751704 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751709 | orchestrator | 2025-06-11 14:54:18.751714 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-11 14:54:18.751719 | orchestrator | Wednesday 11 June 2025 14:54:11 +0000 (0:00:00.358) 0:06:16.034 ******** 2025-06-11 14:54:18.751724 | orchestrator | skipping: [testbed-node-0] 2025-06-11 
14:54:18.751728 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751733 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751738 | orchestrator | 2025-06-11 14:54:18.751743 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-11 14:54:18.751748 | orchestrator | Wednesday 11 June 2025 14:54:12 +0000 (0:00:00.333) 0:06:16.368 ******** 2025-06-11 14:54:18.751752 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751757 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751762 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751767 | orchestrator | 2025-06-11 14:54:18.751771 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-11 14:54:18.751776 | orchestrator | Wednesday 11 June 2025 14:54:12 +0000 (0:00:00.795) 0:06:17.163 ******** 2025-06-11 14:54:18.751781 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:54:18.751786 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:54:18.751790 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:54:18.751795 | orchestrator | 2025-06-11 14:54:18.751800 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-11 14:54:18.751805 | orchestrator | Wednesday 11 June 2025 14:54:13 +0000 (0:00:00.402) 0:06:17.565 ******** 2025-06-11 14:54:18.751809 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751814 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751819 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751824 | orchestrator | 2025-06-11 14:54:18.751829 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-11 14:54:18.751833 | orchestrator | Wednesday 11 June 2025 14:54:14 +0000 (0:00:00.980) 0:06:18.546 ******** 2025-06-11 14:54:18.751838 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:54:18.751843 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:54:18.751848 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:54:18.751852 | orchestrator | 2025-06-11 14:54:18.751857 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:54:18.751862 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-11 14:54:18.751868 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-11 14:54:18.751876 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-11 14:54:18.751881 | orchestrator | 2025-06-11 14:54:18.751885 | orchestrator | 2025-06-11 14:54:18.751890 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:54:18.751895 | orchestrator | Wednesday 11 June 2025 14:54:15 +0000 (0:00:01.265) 0:06:19.812 ******** 2025-06-11 14:54:18.751900 | orchestrator | =============================================================================== 2025-06-11 14:54:18.751905 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.29s 2025-06-11 14:54:18.751909 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.61s 2025-06-11 14:54:18.751914 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.51s 2025-06-11 14:54:18.751919 | orchestrator | haproxy-config : 
Copying over skyline haproxy config -------------------- 6.10s 2025-06-11 14:54:18.751927 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.07s 2025-06-11 14:54:18.751932 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.45s 2025-06-11 14:54:18.751937 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.08s 2025-06-11 14:54:18.751942 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.83s 2025-06-11 14:54:18.751947 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.35s 2025-06-11 14:54:18.751951 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.31s 2025-06-11 14:54:18.751956 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.30s 2025-06-11 14:54:18.751961 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.16s 2025-06-11 14:54:18.751966 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.14s 2025-06-11 14:54:18.751970 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.10s 2025-06-11 14:54:18.751975 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.90s 2025-06-11 14:54:18.751980 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.87s 2025-06-11 14:54:18.751985 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.74s 2025-06-11 14:54:18.751989 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.71s 2025-06-11 14:54:18.751994 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.62s 2025-06-11 14:54:18.751999 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.60s 2025-06-11 14:54:21.773346 | orchestrator | 2025-06-11 14:54:21 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:21.774439 | orchestrator | 2025-06-11 14:54:21 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:21.775304 | orchestrator | 2025-06-11 14:54:21 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:21.776232 | orchestrator | 2025-06-11 14:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:24.812009 | orchestrator | 2025-06-11 14:54:24 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:24.813120 | orchestrator | 2025-06-11 14:54:24 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:24.813496 | orchestrator | 2025-06-11 14:54:24 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:24.813633 | orchestrator | 2025-06-11 14:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:27.850350 | orchestrator | 2025-06-11 14:54:27 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:27.851068 | orchestrator | 2025-06-11 14:54:27 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:27.854318 | orchestrator | 2025-06-11 14:54:27 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:27.854389 | orchestrator | 2025-06-11 
14:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:30.897327 | orchestrator | 2025-06-11 14:54:30 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:30.897566 | orchestrator | 2025-06-11 14:54:30 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:30.905797 | orchestrator | 2025-06-11 14:54:30 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:30.905875 | orchestrator | 2025-06-11 14:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:33.941234 | orchestrator | 2025-06-11 14:54:33 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:33.941674 | orchestrator | 2025-06-11 14:54:33 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:33.942610 | orchestrator | 2025-06-11 14:54:33 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:33.942649 | orchestrator | 2025-06-11 14:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:36.983680 | orchestrator | 2025-06-11 14:54:36 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:36.984333 | orchestrator | 2025-06-11 14:54:36 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:36.985151 | orchestrator | 2025-06-11 14:54:36 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:36.985289 | orchestrator | 2025-06-11 14:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:40.025529 | orchestrator | 2025-06-11 14:54:40 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:40.026381 | orchestrator | 2025-06-11 14:54:40 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:40.029397 | orchestrator | 2025-06-11 14:54:40 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:40.029425 | orchestrator | 2025-06-11 14:54:40 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:43.069086 | orchestrator | 2025-06-11 14:54:43 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:43.070798 | orchestrator | 2025-06-11 14:54:43 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:43.070827 | orchestrator | 2025-06-11 14:54:43 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:43.070835 | orchestrator | 2025-06-11 14:54:43 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:46.106149 | orchestrator | 2025-06-11 14:54:46 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:46.109674 | orchestrator | 2025-06-11 14:54:46 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:46.109754 | orchestrator | 2025-06-11 14:54:46 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:46.109769 | orchestrator | 2025-06-11 14:54:46 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:49.152748 | orchestrator | 2025-06-11 14:54:49 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:49.154915 | orchestrator | 2025-06-11 14:54:49 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:49.157420 | orchestrator | 2025-06-11 14:54:49 | INFO  | Task 
07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:49.157449 | orchestrator | 2025-06-11 14:54:49 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:52.201615 | orchestrator | 2025-06-11 14:54:52 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:52.203459 | orchestrator | 2025-06-11 14:54:52 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:52.205164 | orchestrator | 2025-06-11 14:54:52 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:52.205203 | orchestrator | 2025-06-11 14:54:52 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:55.242819 | orchestrator | 2025-06-11 14:54:55 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:55.242945 | orchestrator | 2025-06-11 14:54:55 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:55.243905 | orchestrator | 2025-06-11 14:54:55 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:55.243969 | orchestrator | 2025-06-11 14:54:55 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:54:58.278827 | orchestrator | 2025-06-11 14:54:58 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:54:58.280613 | orchestrator | 2025-06-11 14:54:58 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:54:58.282372 | orchestrator | 2025-06-11 14:54:58 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:54:58.282416 | orchestrator | 2025-06-11 14:54:58 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:55:01.334943 | orchestrator | 2025-06-11 14:55:01 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:55:01.339084 | orchestrator | 2025-06-11 14:55:01 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:55:01.340437 | orchestrator | 2025-06-11 14:55:01 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:55:01.340470 | orchestrator | 2025-06-11 14:55:01 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:55:04.389586 | orchestrator | 2025-06-11 14:55:04 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:55:04.392294 | orchestrator | 2025-06-11 14:55:04 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:55:04.394528 | orchestrator | 2025-06-11 14:55:04 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:55:04.394770 | orchestrator | 2025-06-11 14:55:04 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:55:07.439097 | orchestrator | 2025-06-11 14:55:07 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:55:07.439546 | orchestrator | 2025-06-11 14:55:07 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:55:07.440925 | orchestrator | 2025-06-11 14:55:07 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:55:07.441582 | orchestrator | 2025-06-11 14:55:07 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:55:10.490409 | orchestrator | 2025-06-11 14:55:10 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state STARTED 2025-06-11 14:55:10.492386 | orchestrator | 2025-06-11 14:55:10 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state 
STARTED
2025-06-11 14:55:10.494199 | orchestrator | 2025-06-11 14:55:10 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED
2025-06-11 14:55:10.494784 | orchestrator | 2025-06-11 14:55:10 | INFO  | Wait 1 second(s) until the next check
[... the same status check repeats roughly every three seconds from 14:55:13 through 14:56:20: tasks 9e9d757a-c49d-4061-9c82-b3f471ed66eb, 7a9a66b6-b596-4537-bfb9-60527eb93b2b and 07d4611f-0a05-4876-aef0-40133e1fd87d all remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
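The run of STARTED lines above is the OSISM client waiting on three background tasks: each round it re-reads every task's state, prints it, and sleeps before checking again. A minimal Python sketch of such a wait loop; the task IDs are copied from the log, while get_task_state is a hypothetical stand-in for the real state lookup:

import time

# Task IDs as they appear in the polling output above.
TASK_IDS = [
    "9e9d757a-c49d-4061-9c82-b3f471ed66eb",
    "7a9a66b6-b596-4537-bfb9-60527eb93b2b",
    "07d4611f-0a05-4876-aef0-40133e1fd87d",
]

def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for the real lookup (in OSISM these are
    Celery task states such as STARTED or SUCCESS)."""
    raise NotImplementedError

def wait_for_tasks(task_ids, delay: float = 1.0) -> None:
    """Poll every pending task, report its state, and wait between
    rounds, mirroring the STARTED/wait lines in the log above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {delay:.0f} second(s) until the next check")
            time.sleep(delay)

With three tasks per round, a one-second wait plus per-task lookups plausibly adds up to the roughly three-second spacing between checks seen above.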
2025-06-11 14:56:23.738572 | orchestrator | 2025-06-11 14:56:23 | INFO  | Task 9e9d757a-c49d-4061-9c82-b3f471ed66eb is in state SUCCESS

PLAY [Prepare deployment of Ceph services] *************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Wednesday 11 June 2025 14:45:13 +0000 (0:00:00.720) 0:00:00.720 ********
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-facts : Check if it is atomic host] *********************************
Wednesday 11 June 2025 14:45:14 +0000 (0:00:01.661) 0:00:01.695 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Wednesday 11 June 2025 14:45:16 +0000 (0:00:00.836) 0:00:03.357 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
TASK [ceph-facts : Check if podman binary is present] **************************
Wednesday 11 June 2025 14:45:17 +0000 (0:00:01.086) 0:00:04.193 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact container_binary] **********************************
Wednesday 11 June 2025 14:45:18 +0000 (0:00:01.086) 0:00:05.280 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Wednesday 11 June 2025 14:45:18 +0000 (0:00:00.612) 0:00:05.893 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Wednesday 11 June 2025 14:45:19 +0000 (0:00:00.640) 0:00:06.533 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Wednesday 11 June 2025 14:45:20 +0000 (0:00:00.989) 0:00:07.523 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
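The podman check followed by Set_fact container_binary amounts to a simple preference rule: use podman when its binary is present, otherwise fall back to docker, and derive ceph_cmd from that choice. A rough Python equivalent, illustrative only (the role itself does this with a stat result and set_fact):

import shutil

def pick_container_binary() -> str:
    # Mirrors the check above: prefer podman when it is on PATH,
    # otherwise fall back to docker.
    return "podman" if shutil.which("podman") else "docker"

container_binary = pick_container_binary()
# ceph_cmd is then built around this binary, e.g. "<binary> run --rm ...".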
TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Wednesday 11 June 2025 14:45:21 +0000 (0:00:00.720) 0:00:08.243 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Wednesday 11 June 2025 14:45:21 +0000 (0:00:00.876) 0:00:09.120 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Wednesday 11 June 2025 14:45:22 +0000 (0:00:00.649) 0:00:09.769 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Find a running mon container] *******************************
Wednesday 11 June 2025 14:45:23 +0000 (0:00:01.008) 0:00:10.777 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Wednesday 11 June 2025 14:45:26 +0000 (0:00:03.192) 0:00:13.970 ********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]
TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Wednesday 11 June 2025 14:45:27 +0000 (0:00:00.741) 0:00:14.712 ********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Wednesday 11 June 2025 14:45:28 +0000 (0:00:01.055) 0:00:15.768 ********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Wednesday 11 June 2025 14:45:28 +0000 (0:00:00.337) 0:00:16.105 ********
skipping: [testbed-node-3] => (item=testbed-node-0; docker ps -q --filter name=ceph-mon-testbed-node-0 returned rc 0 with empty stdout)
skipping: [testbed-node-3] => (item=testbed-node-1; docker ps -q --filter name=ceph-mon-testbed-node-1 returned rc 0 with empty stdout)
skipping: [testbed-node-3] => (item=testbed-node-2; docker ps -q --filter name=ceph-mon-testbed-node-2 returned rc 0 with empty stdout)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Wednesday 11 June 2025 14:45:29 +0000 (0:00:00.169) 0:00:16.274 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Wednesday 11 June 2025 14:45:30 +0000 (0:00:01.175) 0:00:17.450 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Wednesday 11 June 2025 14:45:31 +0000 (0:00:00.928) 0:00:18.378 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
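Find a running mon container probed each monitor with docker ps -q --filter name=ceph-mon-<hostname>, and the skipped running_mon items above record the result: rc 0 with empty stdout, i.e. no mon container is up yet. A sketch of that probe, simplified to run locally (in the play it is delegated to each monitor host):

import subprocess

MON_HOSTS = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

def find_running_mon(mon_hosts) -> str | None:
    """Return the first host with a running ceph-mon container, or None.
    Empty stdout from 'docker ps -q' (rc 0), as in the log above, means
    no matching container exists on that host."""
    for host in mon_hosts:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        if result.stdout.strip():
            return host
    return None

Non-empty output would presumably give the role a container to exec ceph commands in; with none found, the fsid lookup that follows is delegated straight to a monitor host.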
TASK [ceph-facts : Get current fsid] *******************************************
Wednesday 11 June 2025 14:45:32 +0000 (0:00:00.975) 0:00:19.356 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid] **********************************************
Wednesday 11 June 2025 14:45:33 +0000 (0:00:01.366) 0:00:20.723 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Wednesday 11 June 2025 14:45:34 +0000 (0:00:00.830) 0:00:21.553 ********
skipping: [testbed-node-3]

TASK [ceph-facts : Generate cluster fsid] **************************************
Wednesday 11 June 2025 14:45:34 +0000 (0:00:00.097) 0:00:21.651 ********
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact fsid] **********************************************
Wednesday 11 June 2025 14:45:34 +0000 (0:00:00.198) 0:00:21.850 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
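Get current fsid if cluster is already running succeeded against testbed-node-0, so the whole fsid-derivation chain after it is skipped: an existing cluster keeps its fsid, and Generate cluster fsid would only mint a new one on a first deployment. The fsid itself is just a UUID; a minimal sketch of the reuse-or-generate logic (the function name is illustrative, not the role's):

import uuid

def ensure_fsid(current_fsid: str | None) -> str:
    # Reuse the fsid read from a running monitor when there is one;
    # only a brand-new cluster gets a freshly generated random UUID.
    return current_fsid or str(uuid.uuid4())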
TASK [ceph-facts : Resolve device link(s)] *************************************
Wednesday 11 June 2025 14:45:35 +0000 (0:00:00.735) 0:00:22.585 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Wednesday 11 June 2025 14:45:36 +0000 (0:00:01.020) 0:00:23.605 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
Wednesday 11 June 2025 14:45:37 +0000 (0:00:00.996) 0:00:24.602 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
Wednesday 11 June 2025 14:45:38 +0000 (0:00:01.019) 0:00:25.621 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
Wednesday 11 June 2025 14:45:39 +0000 (0:00:00.712) 0:00:26.334 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
Wednesday 11 June 2025 14:45:39 +0000 (0:00:00.799) 0:00:27.134 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
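The three Resolve .../Set_fact build ... pairs above exist to normalize device paths: any /dev/disk/by-* symlink listed in devices, dedicated_devices, or bluestore_wal_devices is resolved to its canonical /dev/sdX target before the OSD tasks use it. All six hosts skip them in this run, so the lists are used as-is. A minimal sketch of the resolution step, assuming a plain list of device paths:

import os

def resolve_device_links(devices: list[str]) -> list[str]:
    # /dev/disk/by-id/... symlinks resolve to their real block device;
    # paths that are already canonical pass through unchanged.
    return [os.path.realpath(dev) for dev in devices]

# Example with an id-style link like those in the device facts below:
# resolve_device_links(["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_997790a1-..."])
# would yield something like ["/dev/sdb"].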
TASK [ceph-facts : Collect existed devices] ************************************
Wednesday 11 June 2025 14:45:40 +0000 (0:00:00.780) 0:00:27.915 ********
[... skipped for every entry in ansible_devices on each host; the identical per-item device-fact dumps are condensed to their item keys below ...]
skipping: [testbed-node-3] => (items: dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (items: dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (items: dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0)
skipping: [testbed-node-0] => (items: loop0-loop7, sda, sr0, …)
skipping: [testbed-node-1] => (items: loop0, loop1, …
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:56:23.747109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:56:23.747129 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.747138 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.747146 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.747154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:56:23.747233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:56:23.747251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:56:23.747263 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.747287 | orchestrator |
2025-06-11 14:56:23.747296 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-11 14:56:23.747304 | orchestrator | Wednesday 11 June 2025 14:45:42 +0000 (0:00:01.775) 0:00:29.690 ********
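Every loop item in the task output below is skipped: on the OSD nodes the osd_auto_discovery flag evaluates false (this testbed pins its OSD devices explicitly), and the control-plane nodes are filtered out by a group guard shown further down. When the flag is enabled, the ceph-facts role instead derives the OSD device list from per-host device facts shaped like the dicts dumped above. A rough, hypothetical Python sketch of that style of auto-discovery filter follows; the actual selection rules live in ceph-ansible's ceph-facts role and differ in detail:

def autodiscover_osd_devices(devices):
    # Hypothetical sketch, not the actual ceph-facts implementation:
    # pick bare, non-removable data disks out of an ansible_facts
    # devices dict shaped like the ones printed in this log.
    candidates = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-", "sr")):  # virtual and optical devices
            continue
        if info.get("removable") != "0":            # removable media
            continue
        if info.get("partitions"):                  # partitioned, e.g. the root disk
            continue
        if info.get("holders"):                     # already claimed, e.g. by an OSD LV
            continue
        candidates.append("/dev/" + name)
    return candidates

# Against the facts above, an OSD node keeps only its unused 20 GB disk:
# sda is partitioned, sdb/sdc are held by ceph LVM volumes, sr0 is
# removable, and the loop devices never qualify.
print(autodiscover_osd_devices({
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph-osd-lv"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
}))  # prints ['/dev/sdd']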
2025-06-11 14:56:23.747313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a', 'dm-uuid-LVM-qVRyAxwlJvte8cTNXy3Q4ieDHHetj3deFYwX2dPbY3zKfgDtHZrzIE9r06eLkkYO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.747322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32', 'dm-uuid-LVM-9ctOp4BFEl0FojxVV506NxxMS68q2DXHMxe31gAQeSYsjeX7eOnl2h2wNXngqQ2x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.747331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.747339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.747352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957', 'dm-uuid-LVM-EbyCR13qjFTphmQN19BXO3d4n1cvwa4haVL98gcncL02tG3KA712BAlcE1qyAVah'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.747369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747378 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41', 'dm-uuid-LVM-MdsAZtVH1G7DkfJmEQHVDEZxrg9oMpJP0d3ZOtz96FrlSOfd8B0hZQ1CkTL0r92D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0x4c5h-mp39-nMoR-hRdC-1mio-j0O1-u14n29', 'scsi-0QEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299', 'scsi-SQEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d', 'dm-uuid-LVM-OZhBBziM30Sv33izNUJCCpS1ZmIlNIDNGZMmdZdnb82chb3ij6QUzfbwJKZdIPA4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjl1Hs-KGha-577H-PI94-OcVY-YPfK-kG6ndB', 'scsi-0QEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3', 'scsi-SQEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1', 'dm-uuid-LVM-kgQ11RSuUfOfaFhh0TgRjAWKWH7JHXuUvHaRAgQy1WMqaNvnF3uD6Jn1dgDVgtwG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc', 'scsi-SQEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.747921 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTUHR3-cU3L-NQI4-ePP2-iL5O-Ympv-XUs7Dw', 'scsi-0QEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86', 'scsi-SQEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748096 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.748133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH3T3i-Dn3M-XNKe-Lyl1-pcgd-aURa-0aARjI', 'scsi-0QEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9', 'scsi-SQEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b', 'scsi-SQEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748225 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748248 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748317 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748337 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
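Note the two distinct skip reasons interleaved above: testbed-node-0 through -2 fail the group guard (inventory_hostname in groups.get(osd_group_name, [])) because they are not OSD hosts, while testbed-node-3 through -5 are in the group and instead fail the osd_auto_discovery flag check. With a list-form when:, Ansible evaluates the conditions in order and reports the first one that evaluates false as false_condition. A minimal Python sketch of that ladder, where the group name "osds" and the condition order are assumptions for illustration:

groups = {"osds": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}

def skip_reason(host, osd_auto_discovery=False, osd_group_name="osds"):
    # Mirrors the conditional ladder visible in the log output: the first
    # condition evaluating false is what Ansible records as false_condition.
    if host not in groups.get(osd_group_name, []):
        return "inventory_hostname in groups.get(osd_group_name, [])"
    if not osd_auto_discovery:
        return "osd_auto_discovery | default(False) | bool"
    return None  # both guards passed; the task would run

print(skip_reason("testbed-node-0"))  # control node: group guard fails first
print(skip_reason("testbed-node-3"))  # OSD node: flag check fails instead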
2025-06-11 14:56:23.748349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748421 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b', 'scsi-SQEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part1', 'scsi-SQEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part14', 'scsi-SQEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part15', 'scsi-SQEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part16', 'scsi-SQEMU_QEMU_HARDDISK_654660fe-f50d-4b40-a68e-7b359b072d1b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748509 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EZ46eo-ukF5-k0SP-GANR-L15Q-lcyW-RFGZXD', 'scsi-0QEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b', 'scsi-SQEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748544 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ELZhyh-Homk-4KJX-dJ89-JbC9-K2tK-3FJ5f5', 'scsi-0QEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961', 'scsi-SQEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86', 'scsi-SQEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748589 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.748608 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748620 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748638 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748649 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748660 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748671 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748694 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748706 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.748717 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.748728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d16ba3d-3882-40bf-a888-e0945c42bfad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748760 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748783 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748795 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748813 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.748824 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748835 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748847 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748858 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748880 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748892 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748910 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0a6b6f4b-c7cd-4123-9b5f-1f0b2c283d7b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:56:23.748923 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 
'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:56:23.748934 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.748945 | orchestrator |
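
NOTE: the long per-item "skipping" records above all come from a single ceph-ansible task that loops over every entry in ansible_facts.devices (loop0..loop7, sda..sdd, sr0). Ansible reports the first condition that evaluated false: the control-plane nodes (testbed-node-0/1/2) fail the OSD group membership test, while the storage nodes (testbed-node-3/4/5) fail the osd_auto_discovery flag, because this testbed lists its OSD devices explicitly. A minimal sketch of that gating pattern, not the actual role task; the group name and variable defaults are assumptions, the two conditions are taken verbatim from the false_condition fields above:

- hosts: all
  gather_facts: true
  vars:
    osd_group_name: ceph-osd        # assumed group name for this sketch
    osd_auto_discovery: false       # off here: OSD devices are listed explicitly
  tasks:
    - name: Scan block devices for auto-discoverable OSDs
      ansible.builtin.debug:
        msg: "candidate device {{ item.key }}"
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when:
        - inventory_hostname in groups.get(osd_group_name, [])
        - osd_auto_discovery | default(False) | bool

Because the conditions are ANDed, only the first false one is printed, which is why the monitor nodes show the group test and the OSD nodes show the auto-discovery flag.
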
2025-06-11 14:56:23.748957 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-11 14:56:23.748969 | orchestrator | Wednesday 11 June 2025 14:45:44 +0000 (0:00:01.718) 0:00:31.408 ********
2025-06-11 14:56:23.748985 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.748996 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.749007 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.749022 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.749042 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.749053 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.749063 | orchestrator |
2025-06-11 14:56:23.749074 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-11 14:56:23.749085 | orchestrator | Wednesday 11 June 2025 14:45:45 +0000 (0:00:01.451) 0:00:32.860 ********
2025-06-11 14:56:23.749096 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.749107 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.749117 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.749127 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.749138 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.749148 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.749159 | orchestrator |
2025-06-11 14:56:23.749169 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-11 14:56:23.749180 | orchestrator | Wednesday 11 June 2025 14:45:46 +0000 (0:00:00.744) 0:00:33.605 ********
2025-06-11 14:56:23.749190 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.749201 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.749211 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.749222 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.749232 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.749243 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.749253 | orchestrator |
2025-06-11 14:56:23.749264 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-11 14:56:23.749334 | orchestrator | Wednesday 11 June 2025 14:45:47 +0000 (0:00:00.976) 0:00:34.581 ********
2025-06-11 14:56:23.749346 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.749357 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.749369 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.749380 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.749392 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.749403 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.749415 | orchestrator |
2025-06-11 14:56:23.749426 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-11 14:56:23.749438 | orchestrator | Wednesday 11 June 2025 14:45:48 +0000 (0:00:00.980) 0:00:35.562 ********
2025-06-11 14:56:23.749449 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.749461 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.749472 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.749484 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.749496 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.749508 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.749519 | orchestrator |
2025-06-11 14:56:23.749531 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-11 14:56:23.749543 | orchestrator | Wednesday 11 June 2025 14:45:49 +0000 (0:00:00.866) 0:00:36.428 ********
2025-06-11 14:56:23.749555 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.749566 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.749578 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.749590 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.749601 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.749613 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.749624 | orchestrator |
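
NOTE: "Check if the ceph conf exists" comes back ok on all six nodes, and both "Read osd pool default crush rule" variants are skipped, so every node keeps the default osd_pool_default_crush_rule fact set right after the check. A sketch of this stat-then-conditionally-read pattern; the path, the grep expression and the crush_rule_config gate are assumptions, not taken from the role:

- name: Check if the ceph conf exists
  ansible.builtin.stat:
    path: /etc/ceph/ceph.conf                     # assumed config path
  register: ceph_conf_stat

- name: Read osd pool default crush rule
  ansible.builtin.command:
    cmd: grep -E '^osd pool default crush rule' /etc/ceph/ceph.conf
  register: crush_rule_out
  changed_when: false
  failed_when: false
  when:
    - ceph_conf_stat.stat.exists
    - crush_rule_config | default(false) | bool   # false in this run, hence the skips

- name: Set osd_pool_default_crush_rule fact
  ansible.builtin.set_fact:
    osd_pool_default_crush_rule: "{{ crush_rule_out.stdout.split('=') | last | trim }}"
  when:
    - crush_rule_out is not skipped
    - crush_rule_out.rc | default(1) == 0
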
2025-06-11 14:56:23.749636 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-11 14:56:23.749648 | orchestrator | Wednesday 11 June 2025 14:45:49 +0000 (0:00:00.598) 0:00:37.027 ********
2025-06-11 14:56:23.749660 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-11 14:56:23.749672 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-11 14:56:23.749684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-11 14:56:23.749695 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-11 14:56:23.749707 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-11 14:56:23.749719 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-11 14:56:23.749737 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-11 14:56:23.749748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:56:23.749759 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-11 14:56:23.749770 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-11 14:56:23.749780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-11 14:56:23.749791 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-11 14:56:23.749801 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-11 14:56:23.749811 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-11 14:56:23.749822 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-11 14:56:23.749833 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-11 14:56:23.749844 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-11 14:56:23.749854 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-11 14:56:23.749865 | orchestrator |
2025-06-11 14:56:23.749875 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-11 14:56:23.749886 | orchestrator | Wednesday 11 June 2025 14:45:52 +0000 (0:00:02.736) 0:00:39.764 ********
2025-06-11 14:56:23.749897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-11 14:56:23.749907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-11 14:56:23.749918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-11 14:56:23.749929 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.749940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-11 14:56:23.749950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-11 14:56:23.749961 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-11 14:56:23.749971 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.749982 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-11 14:56:23.749993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-11 14:56:23.750011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-11 14:56:23.750104 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.750134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:56:23.750155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-11 14:56:23.750177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-11 14:56:23.750197 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.750217 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-11 14:56:23.750228 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-11 14:56:23.750239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-11 14:56:23.750250 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.750261 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-11 14:56:23.750291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-11 14:56:23.750303 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-11 14:56:23.750314 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.750324 | orchestrator |
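
NOTE: _monitor_addresses is accumulated on every host by looping over the three monitor nodes (testbed-node-0/1/2) and appending one name/address pair per monitor; the ipv6 twin is skipped because this deployment is IPv4-only. A sketch of the accumulate-with-set_fact pattern; the group name, the address fact and the ip_version gate are assumptions:

- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_default_ipv4']['address']}] }}"
  loop: "{{ groups['mons'] | default([]) }}"   # assumed monitor group name
  when: ip_version == 'ipv4'                   # mirrors the skipped ipv6 variant
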
2025-06-11 14:56:23.750336 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-11 14:56:23.750355 | orchestrator | Wednesday 11 June 2025 14:45:53 +0000 (0:00:01.065) 0:00:40.829 ********
2025-06-11 14:56:23.750374 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.750395 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.750416 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.750436 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.750453 | orchestrator |
2025-06-11 14:56:23.750465 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-11 14:56:23.750486 | orchestrator | Wednesday 11 June 2025 14:45:54 +0000 (0:00:00.766) 0:00:41.595 ********
2025-06-11 14:56:23.750497 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750507 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.750518 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.750528 | orchestrator |
2025-06-11 14:56:23.750539 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-11 14:56:23.750550 | orchestrator | Wednesday 11 June 2025 14:45:54 +0000 (0:00:00.302) 0:00:41.898 ********
2025-06-11 14:56:23.750561 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750571 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.750582 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.750592 | orchestrator |
2025-06-11 14:56:23.750603 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-11 14:56:23.750614 | orchestrator | Wednesday 11 June 2025 14:45:55 +0000 (0:00:00.538) 0:00:42.436 ********
2025-06-11 14:56:23.750624 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750635 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.750645 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.750656 | orchestrator |
2025-06-11 14:56:23.750667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-11 14:56:23.750678 | orchestrator | Wednesday 11 June 2025 14:45:55 +0000 (0:00:00.586) 0:00:43.022 ********
2025-06-11 14:56:23.750689 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.750699 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.750710 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.750721 | orchestrator |
2025-06-11 14:56:23.750732 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-11 14:56:23.750742 | orchestrator | Wednesday 11 June 2025 14:45:56 +0000 (0:00:00.554) 0:00:43.577 ********
2025-06-11 14:56:23.750753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.750763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.750774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.750784 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750795 | orchestrator |
2025-06-11 14:56:23.750806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-11 14:56:23.750816 | orchestrator | Wednesday 11 June 2025 14:45:56 +0000 (0:00:00.289) 0:00:43.866 ********
2025-06-11 14:56:23.750827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.750838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.750848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.750859 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750870 | orchestrator |
2025-06-11 14:56:23.750880 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-11 14:56:23.750891 | orchestrator | Wednesday 11 June 2025 14:45:57 +0000 (0:00:00.355) 0:00:44.222 ********
2025-06-11 14:56:23.750902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.750912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.750923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.750933 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.750944 | orchestrator |
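
NOTE: set_radosgw_address.yml runs only on the rgw-carrying nodes (testbed-node-3/4/5) and resolves each node's bind address through a fallback chain: a CIDR in radosgw_address_block, then a named radosgw_interface, then a literal radosgw_address; here the first two are skipped and the literal address wins. A sketch of that chain; the variable handling and the ansible.utils.ipaddr filter (from the ansible.utils collection) are assumptions:

- name: Set_fact _radosgw_address to radosgw_address_block ipv4
  ansible.builtin.set_fact:
    _radosgw_address: "{{ ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(radosgw_address_block) | first }}"
  when: radosgw_address_block | default('') | length > 0

- name: Set_fact _radosgw_address to radosgw_address
  ansible.builtin.set_fact:
    _radosgw_address: "{{ radosgw_address }}"
  when: radosgw_address | default('') | length > 0
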
2025-06-11 14:56:23.750954 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-11 14:56:23.750965 | orchestrator | Wednesday 11 June 2025 14:45:57 +0000 (0:00:00.707) 0:00:44.930 ********
2025-06-11 14:56:23.750976 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.750987 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.750997 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.751008 | orchestrator |
2025-06-11 14:56:23.751019 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-11 14:56:23.751036 | orchestrator | Wednesday 11 June 2025 14:45:58 +0000 (0:00:00.719) 0:00:45.649 ********
2025-06-11 14:56:23.751047 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-11 14:56:23.751058 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-11 14:56:23.751069 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-11 14:56:23.751079 | orchestrator |
2025-06-11 14:56:23.751107 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-11 14:56:23.751118 | orchestrator | Wednesday 11 June 2025 14:45:59 +0000 (0:00:00.831) 0:00:46.481 ********
2025-06-11 14:56:23.751135 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:56:23.751146 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:56:23.751157 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:56:23.751170 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.751189 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-11 14:56:23.751208 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-11 14:56:23.751228 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-11 14:56:23.751239 | orchestrator |
2025-06-11 14:56:23.751250 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-11 14:56:23.751261 | orchestrator | Wednesday 11 June 2025 14:46:00 +0000 (0:00:00.789) 0:00:47.271 ********
2025-06-11 14:56:23.751324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:56:23.751338 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:56:23.751348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:56:23.751359 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.751370 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-11 14:56:23.751380 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-11 14:56:23.751392 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-11 14:56:23.751402 | orchestrator |
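
NOTE: "Set_fact rgw_instances" builds one entry per radosgw worker (a single item=0 per node here), and ceph_run_cmd/ceph_admin_command are then computed once but pushed to every host, including testbed-manager, via delegation (the "testbed-node-3 -> ..." arrows above). A sketch of both patterns; the instance fields, the command value and the group name are placeholders, not the role's real values:

- name: Set_fact rgw_instances
  ansible.builtin.set_fact:
    rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address}] }}"
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"

- name: Set_fact ceph_run_cmd
  ansible.builtin.set_fact:
    ceph_run_cmd: ceph        # placeholder; a containerized deployment wraps ceph in the container runtime
  delegate_to: "{{ item }}"
  delegate_facts: true        # stores the fact on the delegated host, not the loop host
  run_once: true
  loop: "{{ groups['all'] }}"
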
2025-06-11 14:56:23.751413 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-11 14:56:23.751424 | orchestrator | Wednesday 11 June 2025 14:46:02 +0000 (0:00:02.116) 0:00:49.388 ********
2025-06-11 14:56:23.751435 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.751446 | orchestrator |
2025-06-11 14:56:23.751457 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-11 14:56:23.751467 | orchestrator | Wednesday 11 June 2025 14:46:03 +0000 (0:00:01.235) 0:00:50.623 ********
2025-06-11 14:56:23.751478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.751487 | orchestrator |
2025-06-11 14:56:23.751496 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-11 14:56:23.751506 | orchestrator | Wednesday 11 June 2025 14:46:04 +0000 (0:00:01.030) 0:00:51.653 ********
2025-06-11 14:56:23.751515 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.751525 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.751534 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.751543 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.751553 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.751562 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.751571 | orchestrator |
2025-06-11 14:56:23.751581 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-11 14:56:23.751598 | orchestrator | Wednesday 11 June 2025 14:46:05 +0000 (0:00:01.433) 0:00:53.087 ********
2025-06-11 14:56:23.751608 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.751617 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.751626 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.751636 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.751645 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.751655 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.751664 | orchestrator |
2025-06-11 14:56:23.751674 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-11 14:56:23.751683 | orchestrator | Wednesday 11 June 2025 14:46:07 +0000 (0:00:01.404) 0:00:54.491 ********
2025-06-11 14:56:23.751693 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.751702 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.751712 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.751721 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.751730 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.751740 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.751749 | orchestrator |
2025-06-11 14:56:23.751759 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-11 14:56:23.751768 | orchestrator | Wednesday 11 June 2025 14:46:08 +0000 (0:00:00.921) 0:00:55.413 ********
2025-06-11 14:56:23.751778 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.751787 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.751796 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.751806 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.751815 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.751824 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.751834 | orchestrator |
2025-06-11 14:56:23.751843 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-11 14:56:23.751853 | orchestrator | Wednesday 11 June 2025 14:46:09 +0000 (0:00:00.930) 0:00:56.344 ********
2025-06-11 14:56:23.751863 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.751872 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.751882 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.751891 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.751900 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.751910 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.751919 | orchestrator |
2025-06-11 14:56:23.751929 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-11 14:56:23.751946 | orchestrator | Wednesday 11 June 2025 14:46:10 +0000 (0:00:01.359) 0:00:57.703 ********
2025-06-11 14:56:23.751956 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.751971 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.751981 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.751991 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752000 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752009 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752019 | orchestrator |
2025-06-11 14:56:23.752028 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-11 14:56:23.752038 | orchestrator | Wednesday 11 June 2025 14:46:11 +0000 (0:00:00.719) 0:00:58.423 ********
2025-06-11 14:56:23.752047 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752057 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752066 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752076 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752085 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752094 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752104 | orchestrator |
2025-06-11 14:56:23.752113 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-11 14:56:23.752123 | orchestrator | Wednesday 11 June 2025 14:46:11 +0000 (0:00:00.642) 0:00:59.065 ********
2025-06-11 14:56:23.752133 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.752143 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.752159 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.752168 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.752178 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.752187 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.752196 | orchestrator |
2025-06-11 14:56:23.752206 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-11 14:56:23.752215 | orchestrator | Wednesday 11 June 2025 14:46:12 +0000 (0:00:00.982) 0:01:00.048 ********
2025-06-11 14:56:23.752225 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.752234 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.752244 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.752253 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.752262 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.752286 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.752296 | orchestrator |
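
NOTE: each "Check for a ... container" probe runs only on the hosts of the matching group (mon/mgr checks on testbed-node-0/1/2, osd/mds/rgw checks on testbed-node-3/4/5, rbd-mirror and nfs nowhere) and registers whether a daemon container is already running, which feeds the handler facts below. A sketch of one probe; the container runtime binary and the container name pattern are assumptions:

- name: Check for a mon container
  ansible.builtin.command:
    cmd: docker ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get(mon_group_name, [])
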
2025-06-11 14:56:23.752305 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-11 14:56:23.752315 | orchestrator | Wednesday 11 June 2025 14:46:14 +0000 (0:00:01.245) 0:01:01.293 ********
2025-06-11 14:56:23.752324 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752334 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752343 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752353 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752362 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752372 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752381 | orchestrator |
2025-06-11 14:56:23.752391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-11 14:56:23.752400 | orchestrator | Wednesday 11 June 2025 14:46:14 +0000 (0:00:00.595) 0:01:01.888 ********
2025-06-11 14:56:23.752410 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752419 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752428 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752438 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.752447 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.752457 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.752466 | orchestrator |
2025-06-11 14:56:23.752476 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-11 14:56:23.752485 | orchestrator | Wednesday 11 June 2025 14:46:15 +0000 (0:00:00.891) 0:01:02.779 ********
2025-06-11 14:56:23.752495 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.752504 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.752514 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.752523 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752533 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752542 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752551 | orchestrator |
2025-06-11 14:56:23.752561 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-11 14:56:23.752570 | orchestrator | Wednesday 11 June 2025 14:46:16 +0000 (0:00:00.813) 0:01:03.593 ********
2025-06-11 14:56:23.752580 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.752589 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.752599 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.752608 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752617 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752627 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752636 | orchestrator |
2025-06-11 14:56:23.752645 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-11 14:56:23.752655 | orchestrator | Wednesday 11 June 2025 14:46:17 +0000 (0:00:00.884) 0:01:04.477 ********
2025-06-11 14:56:23.752664 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.752674 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.752683 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.752692 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752702 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752711 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752727 | orchestrator |
2025-06-11 14:56:23.752736 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-11 14:56:23.752746 | orchestrator | Wednesday 11 June 2025 14:46:17 +0000 (0:00:00.673) 0:01:05.150 ********
2025-06-11 14:56:23.752755 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752765 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752774 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752783 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752793 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752802 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752811 | orchestrator |
2025-06-11 14:56:23.752821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-11 14:56:23.752830 | orchestrator | Wednesday 11 June 2025 14:46:18 +0000 (0:00:00.859) 0:01:06.010 ********
2025-06-11 14:56:23.752840 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752849 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752858 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752868 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.752877 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.752887 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.752896 | orchestrator |
2025-06-11 14:56:23.752912 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-11 14:56:23.752926 | orchestrator | Wednesday 11 June 2025 14:46:19 +0000 (0:00:00.670) 0:01:06.680 ********
2025-06-11 14:56:23.752936 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.752945 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.752954 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.752964 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.752973 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.752983 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.752992 | orchestrator |
2025-06-11 14:56:23.753002 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-11 14:56:23.753012 | orchestrator | Wednesday 11 June 2025 14:46:20 +0000 (0:00:00.845) 0:01:07.526 ********
2025-06-11 14:56:23.753021 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.753031 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.753040 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.753050 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.753060 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.753070 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.753079 | orchestrator |
2025-06-11 14:56:23.753089 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-11 14:56:23.753098 | orchestrator | Wednesday 11 June 2025 14:46:21 +0000 (0:00:00.655) 0:01:08.181 ********
2025-06-11 14:56:23.753107 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.753117 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.753126 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.753136 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.753145 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.753154 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.753163 | orchestrator |
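
NOTE: the handler_*_status facts reduce the container probes to one boolean per host and daemon type; the restart handlers later fire only where the daemon actually runs, which is why mon/mgr status is set on testbed-node-0/1/2, osd/mds/rgw status on testbed-node-3/4/5, and nfs/rbd-mirror status nowhere. A sketch under the same assumptions as the probe shown earlier:

- name: Set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
  when: inventory_hostname in groups.get(mon_group_name, [])
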
2025-06-11 14:56:23.753173 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-11 14:56:23.753182 | orchestrator | Wednesday 11 June 2025 14:46:22 +0000 (0:00:01.275) 0:01:09.457 ********
2025-06-11 14:56:23.753192 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.753201 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.753211 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.753220 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.753229 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.753239 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.753248 | orchestrator |
2025-06-11 14:56:23.753257 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-11 14:56:23.753267 | orchestrator | Wednesday 11 June 2025 14:46:23 +0000 (0:00:01.592) 0:01:11.049 ********
2025-06-11 14:56:23.753292 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.753308 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.753318 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.753328 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.753337 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.753346 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.753356 | orchestrator |
2025-06-11 14:56:23.753365 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-11 14:56:23.753375 | orchestrator | Wednesday 11 June 2025 14:46:25 +0000 (0:00:01.962) 0:01:13.012 ********
2025-06-11 14:56:23.753385 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.753395 | orchestrator |
2025-06-11 14:56:23.753405 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-11 14:56:23.753414 | orchestrator | Wednesday 11 June 2025 14:46:27 +0000 (0:00:01.276) 0:01:14.289 ********
2025-06-11 14:56:23.753423 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.753433 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.753442 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.753452 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.753461 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.753470 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.753480 | orchestrator |
2025-06-11 14:56:23.753489 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-11 14:56:23.753499 | orchestrator | Wednesday 11 June 2025 14:46:27 +0000 (0:00:00.781) 0:01:15.070 ********
2025-06-11 14:56:23.753508 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.753517 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.753527 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.753536 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.753546 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.753555 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.753564 | orchestrator |
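
NOTE: ceph-container-common writes /etc/systemd/system/ceph.target on all six nodes and enables it; the per-daemon systemd units are attached to this target later so the whole Ceph stack can be started and stopped as one unit. A sketch of the two tasks; the unit file content is an assumption, not the role's template:

- name: Generate systemd ceph target file
  ansible.builtin.copy:
    dest: /etc/systemd/system/ceph.target
    content: |
      [Unit]
      Description=ceph target allowing to start/stop all ceph services at once

      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Enable ceph.target
  ansible.builtin.systemd:
    name: ceph.target
    enabled: true
    state: started
    daemon_reload: true
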
14:56:23.753639 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-11 14:56:23.753649 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-11 14:56:23.753658 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-11 14:56:23.753668 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-11 14:56:23.753677 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-11 14:56:23.753687 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-11 14:56:23.753701 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-11 14:56:23.753712 | orchestrator | 2025-06-11 14:56:23.753725 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-11 14:56:23.753735 | orchestrator | Wednesday 11 June 2025 14:46:29 +0000 (0:00:01.408) 0:01:17.051 ******** 2025-06-11 14:56:23.753744 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.753759 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.753769 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.753778 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.753788 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.753797 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.753806 | orchestrator | 2025-06-11 14:56:23.753815 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-11 14:56:23.753825 | orchestrator | Wednesday 11 June 2025 14:46:30 +0000 (0:00:00.881) 0:01:17.933 ******** 2025-06-11 14:56:23.753834 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.753844 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.753853 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.753862 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.753872 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.753881 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.753891 | orchestrator | 2025-06-11 14:56:23.753901 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-11 14:56:23.753910 | orchestrator | Wednesday 11 June 2025 14:46:31 +0000 (0:00:00.824) 0:01:18.757 ******** 2025-06-11 14:56:23.753920 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.753930 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.753939 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.753949 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.753958 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.753968 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.753978 | orchestrator | 2025-06-11 14:56:23.753987 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-11 14:56:23.753997 | orchestrator | Wednesday 11 June 2025 14:46:32 +0000 (0:00:00.569) 0:01:19.327 ******** 2025-06-11 14:56:23.754007 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754042 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754054 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.754064 | orchestrator | skipping: [testbed-node-0] 
2025-06-11 14:56:23.754073 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754083 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754092 | orchestrator | 2025-06-11 14:56:23.754102 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-11 14:56:23.754111 | orchestrator | Wednesday 11 June 2025 14:46:32 +0000 (0:00:00.818) 0:01:20.145 ******** 2025-06-11 14:56:23.754121 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.754131 | orchestrator | 2025-06-11 14:56:23.754140 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-11 14:56:23.754150 | orchestrator | Wednesday 11 June 2025 14:46:34 +0000 (0:00:01.186) 0:01:21.331 ******** 2025-06-11 14:56:23.754159 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.754169 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.754179 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.754188 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.754198 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.754207 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.754217 | orchestrator | 2025-06-11 14:56:23.754227 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-11 14:56:23.754237 | orchestrator | Wednesday 11 June 2025 14:47:47 +0000 (0:01:13.742) 0:02:35.073 ******** 2025-06-11 14:56:23.754246 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754256 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754265 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754291 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754302 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754318 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754329 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754338 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754348 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754357 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754367 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754376 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.754385 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754395 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754404 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754414 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.754424 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754433 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754443 | 
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754452 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754462 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-11 14:56:23.754477 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-11 14:56:23.754487 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-11 14:56:23.754506 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754516 | orchestrator | 2025-06-11 14:56:23.754526 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-11 14:56:23.754535 | orchestrator | Wednesday 11 June 2025 14:47:48 +0000 (0:00:00.827) 0:02:35.900 ******** 2025-06-11 14:56:23.754545 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754555 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754564 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.754573 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.754583 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754593 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754602 | orchestrator | 2025-06-11 14:56:23.754612 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-11 14:56:23.754622 | orchestrator | Wednesday 11 June 2025 14:47:49 +0000 (0:00:00.724) 0:02:36.625 ******** 2025-06-11 14:56:23.754631 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754641 | orchestrator | 2025-06-11 14:56:23.754650 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-11 14:56:23.754660 | orchestrator | Wednesday 11 June 2025 14:47:49 +0000 (0:00:00.160) 0:02:36.786 ******** 2025-06-11 14:56:23.754669 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754679 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754688 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.754698 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.754708 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754717 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754726 | orchestrator | 2025-06-11 14:56:23.754736 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-11 14:56:23.754746 | orchestrator | Wednesday 11 June 2025 14:47:50 +0000 (0:00:01.199) 0:02:37.985 ******** 2025-06-11 14:56:23.754755 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754765 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754774 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.754783 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.754793 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754808 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754817 | orchestrator | 2025-06-11 14:56:23.754827 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-11 14:56:23.754836 | orchestrator | Wednesday 11 June 2025 14:47:51 +0000 (0:00:00.868) 0:02:38.854 ******** 2025-06-11 14:56:23.754846 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.754855 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.754866 | orchestrator | skipping: 
[testbed-node-5] 2025-06-11 14:56:23.754875 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.754885 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.754895 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.754904 | orchestrator | 2025-06-11 14:56:23.754913 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-11 14:56:23.754923 | orchestrator | Wednesday 11 June 2025 14:47:52 +0000 (0:00:00.829) 0:02:39.683 ******** 2025-06-11 14:56:23.754933 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.754942 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.754952 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.754961 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.754971 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.754980 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.754990 | orchestrator | 2025-06-11 14:56:23.754999 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-11 14:56:23.755009 | orchestrator | Wednesday 11 June 2025 14:47:54 +0000 (0:00:02.181) 0:02:41.864 ******** 2025-06-11 14:56:23.755019 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.755028 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.755038 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.755047 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.755057 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.755067 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.755076 | orchestrator | 2025-06-11 14:56:23.755086 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-11 14:56:23.755096 | orchestrator | Wednesday 11 June 2025 14:47:55 +0000 (0:00:00.803) 0:02:42.668 ******** 2025-06-11 14:56:23.755106 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.755116 | orchestrator | 2025-06-11 14:56:23.755126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-11 14:56:23.755135 | orchestrator | Wednesday 11 June 2025 14:47:56 +0000 (0:00:01.121) 0:02:43.790 ******** 2025-06-11 14:56:23.755145 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755154 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755163 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755173 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755182 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755191 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755201 | orchestrator | 2025-06-11 14:56:23.755210 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-11 14:56:23.755220 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:00.648) 0:02:44.438 ******** 2025-06-11 14:56:23.755229 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755239 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755248 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755257 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755267 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755316 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755326 | orchestrator | 
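The run of "Set_fact ceph_release ..." tasks around this point (jewel through reef) probes the major version parsed earlier from the "Get ceph version" output; only the guard matching the pulled 18.x image fires, which is why every release up to quincy is skipped and only the reef task further below reports ok. A minimal sketch of that guard pattern, with an illustrative version test rather than the verbatim upstream release.yml tasks:

    # Sketch, assuming ceph_version holds a string like "18.2.x"
    # (set above from the output of `ceph --version`).
    - name: Set_fact ceph_release quincy
      ansible.builtin.set_fact:
        ceph_release: quincy
      when: ceph_version.split('.')[0] | int == 17

    - name: Set_fact ceph_release reef
      ansible.builtin.set_fact:
        ceph_release: reef
      when: ceph_version.split('.')[0] | int == 18
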
2025-06-11 14:56:23.755336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-11 14:56:23.755345 | orchestrator | Wednesday 11 June 2025 14:47:57 +0000 (0:00:00.713) 0:02:45.152 ******** 2025-06-11 14:56:23.755355 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755365 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755380 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755390 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755399 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755414 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755422 | orchestrator | 2025-06-11 14:56:23.755430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-11 14:56:23.755443 | orchestrator | Wednesday 11 June 2025 14:47:58 +0000 (0:00:00.568) 0:02:45.720 ******** 2025-06-11 14:56:23.755451 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755458 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755466 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755474 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755482 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755489 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755497 | orchestrator | 2025-06-11 14:56:23.755505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-11 14:56:23.755513 | orchestrator | Wednesday 11 June 2025 14:47:59 +0000 (0:00:00.760) 0:02:46.481 ******** 2025-06-11 14:56:23.755520 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755528 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755536 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755544 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755551 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755559 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755567 | orchestrator | 2025-06-11 14:56:23.755575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-11 14:56:23.755583 | orchestrator | Wednesday 11 June 2025 14:47:59 +0000 (0:00:00.618) 0:02:47.100 ******** 2025-06-11 14:56:23.755591 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755598 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755606 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755614 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755621 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755629 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755637 | orchestrator | 2025-06-11 14:56:23.755645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-11 14:56:23.755653 | orchestrator | Wednesday 11 June 2025 14:48:00 +0000 (0:00:00.803) 0:02:47.903 ******** 2025-06-11 14:56:23.755660 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755668 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755676 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755683 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755691 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755699 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755706 | 
orchestrator | 2025-06-11 14:56:23.755714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-11 14:56:23.755722 | orchestrator | Wednesday 11 June 2025 14:48:01 +0000 (0:00:00.601) 0:02:48.505 ******** 2025-06-11 14:56:23.755730 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.755737 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.755745 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.755753 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.755761 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.755768 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.755776 | orchestrator | 2025-06-11 14:56:23.755784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-11 14:56:23.755792 | orchestrator | Wednesday 11 June 2025 14:48:02 +0000 (0:00:00.757) 0:02:49.263 ******** 2025-06-11 14:56:23.755800 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.755808 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.755816 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.755824 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.755832 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.755848 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.755856 | orchestrator | 2025-06-11 14:56:23.755864 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-11 14:56:23.755872 | orchestrator | Wednesday 11 June 2025 14:48:03 +0000 (0:00:01.113) 0:02:50.377 ******** 2025-06-11 14:56:23.755880 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.755888 | orchestrator | 2025-06-11 14:56:23.755895 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-11 14:56:23.755903 | orchestrator | Wednesday 11 June 2025 14:48:04 +0000 (0:00:00.948) 0:02:51.325 ******** 2025-06-11 14:56:23.755911 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-11 14:56:23.755919 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-11 14:56:23.755926 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-11 14:56:23.755934 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-11 14:56:23.755942 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-11 14:56:23.755949 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-11 14:56:23.755957 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-11 14:56:23.755964 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-11 14:56:23.755972 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.755980 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-11 14:56:23.755988 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.755995 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-11 14:56:23.756003 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.756011 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-11 14:56:23.756019 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 
2025-06-11 14:56:23.756026 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756034 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756042 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.756050 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756058 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756070 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.756078 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-11 14:56:23.756089 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756098 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756105 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756113 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756129 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-11 14:56:23.756144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756152 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756183 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756190 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-11 14:56:23.756203 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756211 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756219 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756235 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-11 14:56:23.756250 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756257 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756265 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756312 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-11 14:56:23.756319 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756327 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756349 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-11 14:56:23.756363 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756371 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756378 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756386 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-11 14:56:23.756409 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756417 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756424 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756432 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756440 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756447 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-11 14:56:23.756455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756463 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756470 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756478 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-11 14:56:23.756501 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756509 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756516 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756524 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756532 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756544 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-11 14:56:23.756566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756577 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-11 14:56:23.756585 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-11 14:56:23.756593 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756601 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-11 14:56:23.756609 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-rbd) 2025-06-11 14:56:23.756616 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-11 14:56:23.756624 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-11 14:56:23.756632 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756639 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756647 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-11 14:56:23.756655 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-11 14:56:23.756662 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-11 14:56:23.756670 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-11 14:56:23.756678 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-11 14:56:23.756686 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-11 14:56:23.756693 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-11 14:56:23.756701 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-11 14:56:23.756709 | orchestrator | 2025-06-11 14:56:23.756716 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-11 14:56:23.756724 | orchestrator | Wednesday 11 June 2025 14:48:10 +0000 (0:00:06.727) 0:02:58.052 ******** 2025-06-11 14:56:23.756732 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.756739 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.756747 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.756755 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.756763 | orchestrator | 2025-06-11 14:56:23.756770 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-11 14:56:23.756778 | orchestrator | Wednesday 11 June 2025 14:48:12 +0000 (0:00:01.144) 0:02:59.197 ******** 2025-06-11 14:56:23.756786 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756794 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756801 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756809 | orchestrator | 2025-06-11 14:56:23.756817 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-11 14:56:23.756825 | orchestrator | Wednesday 11 June 2025 14:48:12 +0000 (0:00:00.792) 0:02:59.990 ******** 2025-06-11 14:56:23.756833 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756841 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756848 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.756861 | orchestrator | 
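The two ceph-config rgw tasks just above loop over rgw_instances entries of the form {'instance_name': 'rgw0', 'radosgw_address': ..., 'radosgw_frontend_port': 8081}: each instance gets its own radosgw data directory plus an environment file for the containerized ceph-radosgw systemd unit. A minimal sketch of that per-instance pattern, assuming illustrative paths and a `cluster` variable rather than the exact upstream tasks:

    # Sketch only: one directory and one EnvironmentFile per rgw instance.
    - name: Create rados gateway instance directories
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: directory
        mode: "0755"
      loop: "{{ rgw_instances }}"

    - name: Generate environment file
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
        mode: "0644"
      loop: "{{ rgw_instances }}"
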
2025-06-11 14:56:23.756869 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-11 14:56:23.756876 | orchestrator | Wednesday 11 June 2025 14:48:14 +0000 (0:00:01.466) 0:03:01.457 ******** 2025-06-11 14:56:23.756884 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.756892 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.756900 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.756908 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.756915 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.756923 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.756931 | orchestrator | 2025-06-11 14:56:23.756938 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-11 14:56:23.756946 | orchestrator | Wednesday 11 June 2025 14:48:14 +0000 (0:00:00.621) 0:03:02.078 ******** 2025-06-11 14:56:23.756954 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.756962 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.756969 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.756977 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.756984 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.756992 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757000 | orchestrator | 2025-06-11 14:56:23.757007 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-11 14:56:23.757015 | orchestrator | Wednesday 11 June 2025 14:48:15 +0000 (0:00:00.830) 0:03:02.908 ******** 2025-06-11 14:56:23.757023 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757031 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757038 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757046 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757054 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757061 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757069 | orchestrator | 2025-06-11 14:56:23.757077 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-11 14:56:23.757084 | orchestrator | Wednesday 11 June 2025 14:48:16 +0000 (0:00:00.646) 0:03:03.555 ******** 2025-06-11 14:56:23.757097 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757106 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757113 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757124 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757132 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757140 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757148 | orchestrator | 2025-06-11 14:56:23.757155 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-11 14:56:23.757163 | orchestrator | Wednesday 11 June 2025 14:48:17 +0000 (0:00:00.872) 0:03:04.427 ******** 2025-06-11 14:56:23.757171 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757178 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757186 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757194 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757201 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757209 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757217 | orchestrator | 2025-06-11 14:56:23.757225 | 
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-11 14:56:23.757232 | orchestrator | Wednesday 11 June 2025 14:48:18 +0000 (0:00:00.734) 0:03:05.161 ******** 2025-06-11 14:56:23.757240 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757248 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757255 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757263 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757285 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757293 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757301 | orchestrator | 2025-06-11 14:56:23.757309 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-11 14:56:23.757317 | orchestrator | Wednesday 11 June 2025 14:48:18 +0000 (0:00:00.929) 0:03:06.090 ******** 2025-06-11 14:56:23.757330 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757338 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757345 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757353 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757360 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757368 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757376 | orchestrator | 2025-06-11 14:56:23.757383 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-11 14:56:23.757391 | orchestrator | Wednesday 11 June 2025 14:48:19 +0000 (0:00:00.625) 0:03:06.716 ******** 2025-06-11 14:56:23.757399 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757407 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757414 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757422 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757429 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757437 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757445 | orchestrator | 2025-06-11 14:56:23.757453 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-11 14:56:23.757460 | orchestrator | Wednesday 11 June 2025 14:48:20 +0000 (0:00:00.750) 0:03:07.466 ******** 2025-06-11 14:56:23.757468 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757476 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757484 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757492 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.757499 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.757507 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.757515 | orchestrator | 2025-06-11 14:56:23.757523 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-11 14:56:23.757531 | orchestrator | Wednesday 11 June 2025 14:48:24 +0000 (0:00:03.980) 0:03:11.446 ******** 2025-06-11 14:56:23.757538 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.757546 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.757554 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.757561 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757569 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757577 | orchestrator | skipping: [testbed-node-2] 
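With the `ceph-volume lvm batch --report` tasks above skipped (no new devices to create), the OSD count falls back to what already exists on each node: `ceph-volume lvm list --format json` returns one top-level key per OSD id, and the size of that mapping is added to num_osds, which feeds the _osd_memory_target sizing below. A minimal sketch, assuming a plain ceph-volume invocation rather than whatever container wrapper the playbook actually uses:

    # Sketch: count existing OSDs from ceph-volume's JSON report.
    - name: Run 'ceph-volume lvm list' to see how many osds have already been created
      ansible.builtin.command: ceph-volume lvm list --format json
      register: lvm_list
      changed_when: false

    - name: Set_fact num_osds (add existing osds)
      ansible.builtin.set_fact:
        num_osds: "{{ num_osds | int + (lvm_list.stdout | from_json | length) }}"
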
2025-06-11 14:56:23.757585 | orchestrator | 2025-06-11 14:56:23.757592 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-11 14:56:23.757600 | orchestrator | Wednesday 11 June 2025 14:48:25 +0000 (0:00:00.835) 0:03:12.282 ******** 2025-06-11 14:56:23.757608 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.757615 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.757623 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.757631 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757638 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757646 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757654 | orchestrator | 2025-06-11 14:56:23.757661 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-11 14:56:23.757669 | orchestrator | Wednesday 11 June 2025 14:48:25 +0000 (0:00:00.715) 0:03:12.997 ******** 2025-06-11 14:56:23.757677 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757685 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757692 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757700 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757708 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757715 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757723 | orchestrator | 2025-06-11 14:56:23.757730 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-11 14:56:23.757738 | orchestrator | Wednesday 11 June 2025 14:48:26 +0000 (0:00:00.960) 0:03:13.958 ******** 2025-06-11 14:56:23.757746 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.757759 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.757767 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.757775 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757782 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757790 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757798 | orchestrator | 2025-06-11 14:56:23.757810 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-11 14:56:23.757821 | orchestrator | Wednesday 11 June 2025 14:48:27 +0000 (0:00:00.760) 0:03:14.719 ******** 2025-06-11 14:56:23.757831 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-11 14:56:23.757841 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-11 14:56:23.757850 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757858 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 
'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-11 14:56:23.757866 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-11 14:56:23.757874 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757882 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-11 14:56:23.757890 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-11 14:56:23.757898 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757906 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757913 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757921 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757929 | orchestrator | 2025-06-11 14:56:23.757937 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-11 14:56:23.757944 | orchestrator | Wednesday 11 June 2025 14:48:28 +0000 (0:00:00.771) 0:03:15.491 ******** 2025-06-11 14:56:23.757952 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.757960 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.757967 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.757975 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.757983 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.757990 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.757998 | orchestrator | 2025-06-11 14:56:23.758006 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-11 14:56:23.758126 | orchestrator | Wednesday 11 June 2025 14:48:28 +0000 (0:00:00.512) 0:03:16.003 ******** 2025-06-11 14:56:23.758146 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758154 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.758162 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.758170 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758177 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758185 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758193 | orchestrator | 2025-06-11 14:56:23.758201 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-11 14:56:23.758208 | orchestrator | Wednesday 11 June 2025 14:48:29 +0000 (0:00:00.588) 0:03:16.592 ******** 2025-06-11 14:56:23.758216 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758224 | orchestrator | skipping: 
[testbed-node-4] 2025-06-11 14:56:23.758231 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.758239 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758246 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758254 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758262 | orchestrator | 2025-06-11 14:56:23.758270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-11 14:56:23.758317 | orchestrator | Wednesday 11 June 2025 14:48:29 +0000 (0:00:00.460) 0:03:17.053 ******** 2025-06-11 14:56:23.758325 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758333 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.758341 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.758348 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758356 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758364 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758371 | orchestrator | 2025-06-11 14:56:23.758379 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-11 14:56:23.758387 | orchestrator | Wednesday 11 June 2025 14:48:30 +0000 (0:00:00.626) 0:03:17.679 ******** 2025-06-11 14:56:23.758395 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758435 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.758444 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.758452 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758469 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758477 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758485 | orchestrator | 2025-06-11 14:56:23.758493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-11 14:56:23.758501 | orchestrator | Wednesday 11 June 2025 14:48:31 +0000 (0:00:00.483) 0:03:18.162 ******** 2025-06-11 14:56:23.758508 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.758516 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.758524 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758532 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.758539 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758547 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758554 | orchestrator | 2025-06-11 14:56:23.758562 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-11 14:56:23.758570 | orchestrator | Wednesday 11 June 2025 14:48:31 +0000 (0:00:00.768) 0:03:18.930 ******** 2025-06-11 14:56:23.758577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.758585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:56:23.758593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:56:23.758601 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758608 | orchestrator | 2025-06-11 14:56:23.758616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-11 14:56:23.758624 | orchestrator | Wednesday 11 June 2025 14:48:32 +0000 (0:00:00.346) 0:03:19.277 ******** 2025-06-11 14:56:23.758632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.758646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2025-06-11 14:56:23.758654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:56:23.758662 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758669 | orchestrator | 2025-06-11 14:56:23.758677 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-11 14:56:23.758685 | orchestrator | Wednesday 11 June 2025 14:48:32 +0000 (0:00:00.369) 0:03:19.647 ******** 2025-06-11 14:56:23.758692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.758700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:56:23.758708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:56:23.758716 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.758723 | orchestrator | 2025-06-11 14:56:23.758731 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-11 14:56:23.758739 | orchestrator | Wednesday 11 June 2025 14:48:32 +0000 (0:00:00.358) 0:03:20.005 ******** 2025-06-11 14:56:23.758747 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.758755 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.758762 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.758770 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758778 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758785 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758793 | orchestrator | 2025-06-11 14:56:23.758800 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-11 14:56:23.758808 | orchestrator | Wednesday 11 June 2025 14:48:33 +0000 (0:00:00.515) 0:03:20.521 ******** 2025-06-11 14:56:23.758816 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-11 14:56:23.758824 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-11 14:56:23.758831 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.758839 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-11 14:56:23.758847 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.758854 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-11 14:56:23.758862 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-11 14:56:23.758868 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.758875 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-11 14:56:23.758881 | orchestrator | 2025-06-11 14:56:23.758888 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-11 14:56:23.758894 | orchestrator | Wednesday 11 June 2025 14:48:35 +0000 (0:00:01.734) 0:03:22.256 ******** 2025-06-11 14:56:23.758901 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.758908 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.758915 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.758921 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.758928 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.758934 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.758941 | orchestrator | 2025-06-11 14:56:23.758947 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-11 14:56:23.758954 | orchestrator | Wednesday 11 June 2025 14:48:37 +0000 (0:00:02.756) 0:03:25.012 ******** 2025-06-11 14:56:23.758960 | orchestrator | changed: [testbed-node-3] 
2025-06-11 14:56:23.758966 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.758973 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.758979 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.758986 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.758992 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.758999 | orchestrator | 2025-06-11 14:56:23.759005 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-11 14:56:23.759012 | orchestrator | Wednesday 11 June 2025 14:48:38 +0000 (0:00:00.932) 0:03:25.945 ******** 2025-06-11 14:56:23.759018 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.759025 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.759031 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.759042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.759049 | orchestrator | 2025-06-11 14:56:23.759055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-11 14:56:23.759062 | orchestrator | Wednesday 11 June 2025 14:48:39 +0000 (0:00:00.893) 0:03:26.839 ******** 2025-06-11 14:56:23.759069 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.759075 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.759082 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.759088 | orchestrator | 2025-06-11 14:56:23.759115 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-11 14:56:23.759126 | orchestrator | Wednesday 11 June 2025 14:48:39 +0000 (0:00:00.277) 0:03:27.117 ******** 2025-06-11 14:56:23.759133 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.759140 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.759146 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.759153 | orchestrator | 2025-06-11 14:56:23.759159 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-11 14:56:23.759166 | orchestrator | Wednesday 11 June 2025 14:48:41 +0000 (0:00:01.476) 0:03:28.594 ******** 2025-06-11 14:56:23.759172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-11 14:56:23.759179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-11 14:56:23.759185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-11 14:56:23.759192 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.759198 | orchestrator | 2025-06-11 14:56:23.759205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-11 14:56:23.759211 | orchestrator | Wednesday 11 June 2025 14:48:42 +0000 (0:00:00.646) 0:03:29.240 ******** 2025-06-11 14:56:23.759218 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.759225 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.759231 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.759238 | orchestrator | 2025-06-11 14:56:23.759245 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-11 14:56:23.759251 | orchestrator | Wednesday 11 June 2025 14:48:42 +0000 (0:00:00.305) 0:03:29.546 ******** 2025-06-11 14:56:23.759258 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.759264 | orchestrator | skipping: [testbed-node-1] 2025-06-11 
14:56:23.759283 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.759291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.759297 | orchestrator |
2025-06-11 14:56:23.759304 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-11 14:56:23.759311 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:00.857) 0:03:30.404 ********
2025-06-11 14:56:23.759318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.759324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.759331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.759338 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759344 | orchestrator |
2025-06-11 14:56:23.759351 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-11 14:56:23.759358 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:00.376) 0:03:30.780 ********
2025-06-11 14:56:23.759364 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759371 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.759377 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.759384 | orchestrator |
2025-06-11 14:56:23.759391 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-11 14:56:23.759397 | orchestrator | Wednesday 11 June 2025 14:48:43 +0000 (0:00:00.321) 0:03:31.102 ********
2025-06-11 14:56:23.759404 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759411 | orchestrator |
2025-06-11 14:56:23.759423 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-11 14:56:23.759430 | orchestrator | Wednesday 11 June 2025 14:48:44 +0000 (0:00:00.222) 0:03:31.325 ********
2025-06-11 14:56:23.759437 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759443 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.759450 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.759456 | orchestrator |
2025-06-11 14:56:23.759463 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-11 14:56:23.759470 | orchestrator | Wednesday 11 June 2025 14:48:44 +0000 (0:00:00.306) 0:03:31.632 ********
2025-06-11 14:56:23.759476 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759483 | orchestrator |
2025-06-11 14:56:23.759489 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-11 14:56:23.759496 | orchestrator | Wednesday 11 June 2025 14:48:44 +0000 (0:00:00.207) 0:03:31.839 ********
2025-06-11 14:56:23.759502 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759509 | orchestrator |
2025-06-11 14:56:23.759515 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-11 14:56:23.759522 | orchestrator | Wednesday 11 June 2025 14:48:44 +0000 (0:00:00.221) 0:03:32.060 ********
2025-06-11 14:56:23.759528 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759535 | orchestrator |
2025-06-11 14:56:23.759541 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-11 14:56:23.759548 | orchestrator | Wednesday 11 June 2025 14:48:45 +0000 (0:00:00.339) 0:03:32.400 ********
2025-06-11 14:56:23.759555 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759561 | orchestrator |
2025-06-11 14:56:23.759568 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-11 14:56:23.759574 | orchestrator | Wednesday 11 June 2025 14:48:45 +0000 (0:00:00.291) 0:03:32.692 ********
2025-06-11 14:56:23.759581 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759587 | orchestrator |
2025-06-11 14:56:23.759594 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-11 14:56:23.759600 | orchestrator | Wednesday 11 June 2025 14:48:45 +0000 (0:00:00.290) 0:03:32.983 ********
2025-06-11 14:56:23.759607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.759614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.759620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.759627 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759633 | orchestrator |
2025-06-11 14:56:23.759640 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-11 14:56:23.759647 | orchestrator | Wednesday 11 June 2025 14:48:46 +0000 (0:00:00.471) 0:03:33.454 ********
2025-06-11 14:56:23.759653 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759680 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.759689 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.759695 | orchestrator |
2025-06-11 14:56:23.759706 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-11 14:56:23.759713 | orchestrator | Wednesday 11 June 2025 14:48:46 +0000 (0:00:00.296) 0:03:33.751 ********
2025-06-11 14:56:23.759719 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759726 | orchestrator |
2025-06-11 14:56:23.759732 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-11 14:56:23.759739 | orchestrator | Wednesday 11 June 2025 14:48:46 +0000 (0:00:00.226) 0:03:33.978 ********
2025-06-11 14:56:23.759745 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759752 | orchestrator |
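The OSD handler block above is ceph-ansible's rolling-restart guard: around any OSD restart it clears the noup flag and temporarily disables the balancer and per-pool pg autoscaling so that recovering placement groups do not trigger needless data movement, then re-enables both. Every task is skipped on this run because no OSD configuration changed. A minimal sketch of the same sequence done by hand, assuming an admin keyring on the node (pool and OSD ids are placeholders):

    # Quiesce rebalancing, restart one OSD, then restore (illustrative only)
    ceph balancer off
    ceph osd pool set <pool> pg_autoscale_mode off   # per affected pool
    systemctl restart ceph-osd@<id>                  # one OSD at a time
    ceph osd unset noup                              # allow restarted OSDs to be marked up
    ceph osd pool set <pool> pg_autoscale_mode on
    ceph balancer on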
2025-06-11 14:56:23.759758 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-11 14:56:23.759765 | orchestrator | Wednesday 11 June 2025 14:48:47 +0000 (0:00:00.222) 0:03:34.200 ********
2025-06-11 14:56:23.759771 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.759778 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.759784 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.759795 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.759802 | orchestrator |
2025-06-11 14:56:23.759809 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-11 14:56:23.759815 | orchestrator | Wednesday 11 June 2025 14:48:48 +0000 (0:00:01.021) 0:03:35.221 ********
2025-06-11 14:56:23.759822 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.759828 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.759835 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.759841 | orchestrator |
2025-06-11 14:56:23.759848 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-11 14:56:23.759854 | orchestrator | Wednesday 11 June 2025 14:48:48 +0000 (0:00:00.317) 0:03:35.539 ********
2025-06-11 14:56:23.759861 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.759868 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.759874 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.759881 | orchestrator |
2025-06-11 14:56:23.759887 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-11 14:56:23.759894 | orchestrator | Wednesday 11 June 2025 14:48:49 +0000 (0:00:01.268) 0:03:36.808 ********
2025-06-11 14:56:23.759901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.759907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.759914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.759921 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.759927 | orchestrator |
2025-06-11 14:56:23.759933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-11 14:56:23.759940 | orchestrator | Wednesday 11 June 2025 14:48:50 +0000 (0:00:01.162) 0:03:37.970 ********
2025-06-11 14:56:23.759947 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.759953 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.759960 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.759966 | orchestrator |
2025-06-11 14:56:23.759973 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-11 14:56:23.759979 | orchestrator | Wednesday 11 June 2025 14:48:51 +0000 (0:00:00.396) 0:03:38.366 ********
2025-06-11 14:56:23.759986 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.759993 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.759999 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.760012 | orchestrator |
2025-06-11 14:56:23.760019 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-11 14:56:23.760025 | orchestrator | Wednesday 11 June 2025 14:48:52 +0000 (0:00:01.210) 0:03:39.577 ********
2025-06-11 14:56:23.760032 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.760039 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.760045 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.760063 | orchestrator |
2025-06-11 14:56:23.760070 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-11 14:56:23.760077 | orchestrator | Wednesday 11 June 2025 14:48:52 +0000 (0:00:00.338) 0:03:39.915 ********
2025-06-11 14:56:23.760083 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.760090 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.760096 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.760102 | orchestrator |
2025-06-11 14:56:23.760109 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-11 14:56:23.760116 | orchestrator | Wednesday 11 June 2025 14:48:53 +0000 (0:00:01.190) 0:03:41.106 ********
2025-06-11 14:56:23.760122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.760129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.760135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.760146 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.760153 | orchestrator |
2025-06-11 14:56:23.760160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-11 14:56:23.760166 | orchestrator | Wednesday 11 June 2025 14:48:54 +0000 (0:00:00.727) 0:03:41.834 ********
2025-06-11 14:56:23.760173 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.760179 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.760186 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.760192 | orchestrator |
2025-06-11 14:56:23.760199 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-11 14:56:23.760205 | orchestrator | Wednesday 11 June 2025 14:48:54 +0000 (0:00:00.279) 0:03:42.114 ********
2025-06-11 14:56:23.760212 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.760219 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.760225 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.760232 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760238 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760245 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760251 | orchestrator |
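The same pattern repeats for every daemon class: a "Copy ... restart script" task drops a templated restart wrapper on the node, and a "Restart ... daemon(s)" task runs it serially, delegated through the first host of the group so only one daemon bounces at a time; whether anything runs is gated by the _*_handler_called facts set before and after. A rough shape of such a wrapper, assuming systemd-managed daemons (unit name and timings are illustrative, not ceph-ansible's actual template):

    #!/bin/bash
    # Restart the local daemon, then wait for it to report active again.
    unit="ceph-mds@$(hostname -s)"
    systemctl restart "$unit"
    for _ in $(seq 1 30); do
        systemctl is-active --quiet "$unit" && exit 0
        sleep 2
    done
    echo "ERROR: $unit did not come back up" >&2
    exit 1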
2025-06-11 14:56:23.760258 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-11 14:56:23.760298 | orchestrator | Wednesday 11 June 2025 14:48:55 +0000 (0:00:00.728) 0:03:42.842 ********
2025-06-11 14:56:23.760310 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.760317 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.760323 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.760330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.760337 | orchestrator |
2025-06-11 14:56:23.760344 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-11 14:56:23.760350 | orchestrator | Wednesday 11 June 2025 14:48:56 +0000 (0:00:00.965) 0:03:43.807 ********
2025-06-11 14:56:23.760357 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760364 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760370 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760377 | orchestrator |
2025-06-11 14:56:23.760384 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-11 14:56:23.760390 | orchestrator | Wednesday 11 June 2025 14:48:56 +0000 (0:00:00.332) 0:03:44.140 ********
2025-06-11 14:56:23.760397 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.760403 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.760410 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.760416 | orchestrator |
2025-06-11 14:56:23.760423 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-11 14:56:23.760429 | orchestrator | Wednesday 11 June 2025 14:48:58 +0000 (0:00:01.119) 0:03:45.259 ********
2025-06-11 14:56:23.760436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:56:23.760443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-11 14:56:23.760450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-11 14:56:23.760456 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760463 | orchestrator |
2025-06-11 14:56:23.760469 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-11 14:56:23.760476 | orchestrator | Wednesday 11 June 2025 14:48:58 +0000 (0:00:00.691) 0:03:45.951 ********
2025-06-11 14:56:23.760483 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760489 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760496 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760502 | orchestrator |
2025-06-11 14:56:23.760509 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-11 14:56:23.760516 | orchestrator |
2025-06-11 14:56:23.760522 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-11 14:56:23.760529 | orchestrator | Wednesday 11 June 2025 14:48:59 +0000 (0:00:00.769) 0:03:46.721 ********
2025-06-11 14:56:23.760536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.760548 | orchestrator |
2025-06-11 14:56:23.760555 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-11 14:56:23.760561 | orchestrator | Wednesday 11 June 2025 14:49:00 +0000 (0:00:00.557) 0:03:47.279 ********
2025-06-11 14:56:23.760568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.760574 | orchestrator |
2025-06-11 14:56:23.760581 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-11 14:56:23.760588 | orchestrator | Wednesday 11 June 2025 14:49:00 +0000 (0:00:00.806) 0:03:48.085 ********
2025-06-11 14:56:23.760595 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760601 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760608 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760614 | orchestrator |
2025-06-11 14:56:23.760621 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-11 14:56:23.760627 | orchestrator | Wednesday 11 June 2025 14:49:01 +0000 (0:00:00.751) 0:03:48.837 ********
2025-06-11 14:56:23.760634 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760641 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760647 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760654 | orchestrator |
2025-06-11 14:56:23.760660 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-11 14:56:23.760667 | orchestrator | Wednesday 11 June 2025 14:49:02 +0000 (0:00:00.355) 0:03:49.192 ********
2025-06-11 14:56:23.760673 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760680 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760686 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760693 | orchestrator |
2025-06-11 14:56:23.760699 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-11 14:56:23.760706 | orchestrator | Wednesday 11 June 2025 14:49:02 +0000 (0:00:00.341) 0:03:49.534 ********
2025-06-11 14:56:23.760713 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760719 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760726 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760732 | orchestrator |
2025-06-11 14:56:23.760739 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-11 14:56:23.760746 | orchestrator | Wednesday 11 June 2025 14:49:02 +0000 (0:00:00.582) 0:03:50.116 ********
2025-06-11 14:56:23.760753 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760759 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760766 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760773 | orchestrator |
2025-06-11 14:56:23.760780 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-11 14:56:23.760787 | orchestrator | Wednesday 11 June 2025 14:49:03 +0000 (0:00:00.757) 0:03:50.874 ********
2025-06-11 14:56:23.760794 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760800 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760806 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760813 | orchestrator |
2025-06-11 14:56:23.760820 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-11 14:56:23.760826 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:00.301) 0:03:51.176 ********
2025-06-11 14:56:23.760834 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.760845 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.760856 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.760868 | orchestrator |
2025-06-11 14:56:23.760906 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-11 14:56:23.760919 | orchestrator | Wednesday 11 June 2025 14:49:04 +0000 (0:00:00.279) 0:03:51.456 ********
2025-06-11 14:56:23.760926 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760932 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760939 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760951 | orchestrator |
2025-06-11 14:56:23.760958 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-11 14:56:23.760964 | orchestrator | Wednesday 11 June 2025 14:49:05 +0000 (0:00:01.069) 0:03:52.525 ********
2025-06-11 14:56:23.760971 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.760978 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.760984 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.760990 | orchestrator |
2025-06-11 14:56:23.760997 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-11 14:56:23.761004 | orchestrator | Wednesday 11 June 2025 14:49:06 +0000 (0:00:00.833) 0:03:53.358 ********
2025-06-11 14:56:23.761010 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761017 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761023 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761030 | orchestrator |
2025-06-11 14:56:23.761037 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-11 14:56:23.761043 | orchestrator | Wednesday 11 June 2025 14:49:06 +0000 (0:00:00.296) 0:03:53.655 ********
2025-06-11 14:56:23.761050 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761056 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761063 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761069 | orchestrator |
2025-06-11 14:56:23.761076 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-11 14:56:23.761083 | orchestrator | Wednesday 11 June 2025 14:49:06 +0000 (0:00:00.320) 0:03:53.976 ********
2025-06-11 14:56:23.761089 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761096 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761102 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761109 | orchestrator |
2025-06-11 14:56:23.761115 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-11 14:56:23.761122 | orchestrator | Wednesday 11 June 2025 14:49:07 +0000 (0:00:00.434) 0:03:54.410 ********
2025-06-11 14:56:23.761128 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761135 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761141 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761148 | orchestrator |
2025-06-11 14:56:23.761155 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-11 14:56:23.761161 | orchestrator | Wednesday 11 June 2025 14:49:07 +0000 (0:00:00.293) 0:03:54.704 ********
2025-06-11 14:56:23.761168 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761174 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761181 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761187 | orchestrator |
2025-06-11 14:56:23.761194 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-11 14:56:23.761201 | orchestrator | Wednesday 11 June 2025 14:49:07 +0000 (0:00:00.293) 0:03:54.998 ********
2025-06-11 14:56:23.761207 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761214 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761220 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761227 | orchestrator |
2025-06-11 14:56:23.761233 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-11 14:56:23.761240 | orchestrator | Wednesday 11 June 2025 14:49:08 +0000 (0:00:00.283) 0:03:55.281 ********
2025-06-11 14:56:23.761246 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761253 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.761259 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.761266 | orchestrator |
2025-06-11 14:56:23.761288 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-11 14:56:23.761296 | orchestrator | Wednesday 11 June 2025 14:49:08 +0000 (0:00:00.475) 0:03:55.757 ********
2025-06-11 14:56:23.761303 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761310 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761316 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761323 | orchestrator |
2025-06-11 14:56:23.761330 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-11 14:56:23.761342 | orchestrator | Wednesday 11 June 2025 14:49:08 +0000 (0:00:00.360) 0:03:56.118 ********
2025-06-11 14:56:23.761349 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761355 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761361 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761368 | orchestrator |
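The "Check for a ... container" tasks probe each node for an already-running daemon container; their results feed the handler_*_status facts, which in turn gate the restart handlers seen earlier. On a node this boils down to something like the following (container runtime and name pattern are assumptions for illustration):

    # Non-empty output means a mon container is already running here
    podman ps -q --filter "name=ceph-mon-$(hostname -s)"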
2025-06-11 14:56:23.761374 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-11 14:56:23.761381 | orchestrator | Wednesday 11 June 2025 14:49:09 +0000 (0:00:00.350) 0:03:56.468 ********
2025-06-11 14:56:23.761387 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761394 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761400 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761407 | orchestrator |
2025-06-11 14:56:23.761413 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-11 14:56:23.761420 | orchestrator | Wednesday 11 June 2025 14:49:09 +0000 (0:00:00.650) 0:03:57.119 ********
2025-06-11 14:56:23.761426 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761433 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761439 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761446 | orchestrator |
2025-06-11 14:56:23.761452 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-11 14:56:23.761459 | orchestrator | Wednesday 11 June 2025 14:49:10 +0000 (0:00:00.355) 0:03:57.474 ********
2025-06-11 14:56:23.761465 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.761472 | orchestrator |
2025-06-11 14:56:23.761479 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-11 14:56:23.761485 | orchestrator | Wednesday 11 June 2025 14:49:10 +0000 (0:00:00.618) 0:03:58.093 ********
2025-06-11 14:56:23.761492 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.761498 | orchestrator |
2025-06-11 14:56:23.761505 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-11 14:56:23.761532 | orchestrator | Wednesday 11 June 2025 14:49:11 +0000 (0:00:00.170) 0:03:58.263 ********
2025-06-11 14:56:23.761540 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-11 14:56:23.761546 | orchestrator |
2025-06-11 14:56:23.761557 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-11 14:56:23.761563 | orchestrator | Wednesday 11 June 2025 14:49:12 +0000 (0:00:01.539) 0:03:59.803 ********
2025-06-11 14:56:23.761570 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761576 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761583 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761589 | orchestrator |
2025-06-11 14:56:23.761596 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-11 14:56:23.761602 | orchestrator | Wednesday 11 June 2025 14:49:13 +0000 (0:00:00.479) 0:04:00.282 ********
2025-06-11 14:56:23.761609 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761615 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761621 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761628 | orchestrator |
2025-06-11 14:56:23.761634 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-11 14:56:23.761641 | orchestrator | Wednesday 11 June 2025 14:49:13 +0000 (0:00:00.404) 0:04:00.687 ********
2025-06-11 14:56:23.761647 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.761654 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.761660 | orchestrator | changed: [testbed-node-2]
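Monitor bootstrap begins with keyrings: a cluster-wide mon. key is generated once on the orchestrator (note the delegation to localhost) and then materialized on every mon node, followed by an admin key that is imported into the mon keyring a few tasks later. The manual equivalent, per Ceph's manual-deployment documentation (paths illustrative):

    # Create the monitor keyring with a mon. key
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    # Create the client.admin keyring
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    # Merge the admin key into the mon keyring
    ceph-authtool /tmp/ceph.mon.keyring \
        --import-keyring /etc/ceph/ceph.client.admin.keyring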
2025-06-11 14:56:23.761667 | orchestrator |
2025-06-11 14:56:23.761673 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-11 14:56:23.761680 | orchestrator | Wednesday 11 June 2025 14:49:14 +0000 (0:00:01.171) 0:04:01.859 ********
2025-06-11 14:56:23.761687 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.761693 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.761699 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.761706 | orchestrator |
2025-06-11 14:56:23.761712 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-06-11 14:56:23.761723 | orchestrator | Wednesday 11 June 2025 14:49:15 +0000 (0:00:00.929) 0:04:02.788 ********
2025-06-11 14:56:23.761730 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.761736 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.761743 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.761749 | orchestrator |
2025-06-11 14:56:23.761756 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-06-11 14:56:23.761762 | orchestrator | Wednesday 11 June 2025 14:49:16 +0000 (0:00:00.614) 0:04:03.402 ********
2025-06-11 14:56:23.761769 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761776 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.761782 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.761788 | orchestrator |
2025-06-11 14:56:23.761795 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-06-11 14:56:23.761802 | orchestrator | Wednesday 11 June 2025 14:49:16 +0000 (0:00:00.713) 0:04:04.115 ********
2025-06-11 14:56:23.761808 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.761815 | orchestrator |
2025-06-11 14:56:23.761821 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-06-11 14:56:23.761828 | orchestrator | Wednesday 11 June 2025 14:49:18 +0000 (0:00:01.257) 0:04:05.373 ********
2025-06-11 14:56:23.761834 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.761841 | orchestrator |
2025-06-11 14:56:23.761847 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-06-11 14:56:23.761854 | orchestrator | Wednesday 11 June 2025 14:49:18 +0000 (0:00:00.629) 0:04:06.003 ********
2025-06-11 14:56:23.761861 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.761867 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-11 14:56:23.761874 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-11 14:56:23.761880 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-11 14:56:23.761887 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-06-11 14:56:23.761893 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-11 14:56:23.761900 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-11 14:56:23.761906 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-06-11 14:56:23.761913 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-11 14:56:23.761919 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-06-11 14:56:23.761926 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-06-11 14:56:23.761932 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-06-11 14:56:23.761939 | orchestrator |
2025-06-11 14:56:23.761945 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-06-11 14:56:23.761952 | orchestrator | Wednesday 11 June 2025 14:49:22 +0000 (0:00:03.205) 0:04:09.208 ********
2025-06-11 14:56:23.761958 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.761965 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.761971 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.761978 | orchestrator |
2025-06-11 14:56:23.761984 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-06-11 14:56:23.761991 | orchestrator | Wednesday 11 June 2025 14:49:23 +0000 (0:00:01.236) 0:04:10.445 ********
2025-06-11 14:56:23.761997 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762004 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.762010 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.762036 | orchestrator |
2025-06-11 14:56:23.762044 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-06-11 14:56:23.762051 | orchestrator | Wednesday 11 June 2025 14:49:23 +0000 (0:00:00.295) 0:04:10.740 ********
2025-06-11 14:56:23.762057 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762063 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.762070 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.762083 | orchestrator |
2025-06-11 14:56:23.762090 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-06-11 14:56:23.762096 | orchestrator | Wednesday 11 June 2025 14:49:23 +0000 (0:00:00.372) 0:04:11.112 ********
2025-06-11 14:56:23.762103 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762110 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762116 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762123 | orchestrator |
2025-06-11 14:56:23.762149 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-06-11 14:56:23.762161 | orchestrator | Wednesday 11 June 2025 14:49:25 +0000 (0:00:02.006) 0:04:13.119 ********
2025-06-11 14:56:23.762168 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762174 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762180 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762187 | orchestrator |
2025-06-11 14:56:23.762194 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-06-11 14:56:23.762200 | orchestrator | Wednesday 11 June 2025 14:49:27 +0000 (0:00:01.482) 0:04:14.602 ********
2025-06-11 14:56:23.762207 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762214 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762220 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762227 | orchestrator |
2025-06-11 14:56:23.762233 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-06-11 14:56:23.762240 | orchestrator | Wednesday 11 June 2025 14:49:27 +0000 (0:00:00.255) 0:04:14.857 ********
2025-06-11 14:56:23.762246 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.762253 | orchestrator |
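With the keyrings distributed, each mon builds an initial monmap and initializes its store ("mkfs"). Both commands run through the ceph container here (hence the container-command set_facts just above), but the underlying tools are monmaptool and ceph-mon. A hand-rolled equivalent using the mon addresses visible in this log (fsid illustrative):

    # Build the initial monmap listing all three monitors
    monmaptool --create --fsid "$(uuidgen)" \
        --add testbed-node-0 192.168.16.10 \
        --add testbed-node-1 192.168.16.11 \
        --add testbed-node-2 192.168.16.12 \
        /tmp/monmap
    # Initialize this monitor's data directory from monmap + keyring
    ceph-mon --mkfs -i "$(hostname -s)" \
        --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring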
2025-06-11 14:56:23.762259 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-06-11 14:56:23.762266 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:00.454) 0:04:15.312 ********
2025-06-11 14:56:23.762285 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762293 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762299 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762305 | orchestrator |
2025-06-11 14:56:23.762312 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-06-11 14:56:23.762319 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:00.461) 0:04:15.774 ********
2025-06-11 14:56:23.762325 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762331 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762338 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762344 | orchestrator |
2025-06-11 14:56:23.762351 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-06-11 14:56:23.762357 | orchestrator | Wednesday 11 June 2025 14:49:28 +0000 (0:00:00.322) 0:04:16.096 ********
2025-06-11 14:56:23.762364 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.762370 | orchestrator |
2025-06-11 14:56:23.762377 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-06-11 14:56:23.762383 | orchestrator | Wednesday 11 June 2025 14:49:29 +0000 (0:00:00.461) 0:04:16.557 ********
2025-06-11 14:56:23.762390 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762396 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762403 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762409 | orchestrator |
2025-06-11 14:56:23.762416 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-06-11 14:56:23.762422 | orchestrator | Wednesday 11 June 2025 14:49:31 +0000 (0:00:01.940) 0:04:18.498 ********
2025-06-11 14:56:23.762429 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762435 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762442 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762448 | orchestrator |
2025-06-11 14:56:23.762455 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-06-11 14:56:23.762468 | orchestrator | Wednesday 11 June 2025 14:49:32 +0000 (0:00:01.183) 0:04:19.681 ********
2025-06-11 14:56:23.762475 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762482 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762488 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762495 | orchestrator |
2025-06-11 14:56:23.762501 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-06-11 14:56:23.762508 | orchestrator | Wednesday 11 June 2025 14:49:34 +0000 (0:00:01.633) 0:04:21.315 ********
2025-06-11 14:56:23.762514 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.762521 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.762528 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.762534 | orchestrator |
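In this containerized deployment the systemd unit does not start a bare ceph-mon binary: its ExecStart launches the daemon container, and all mon units hang off ceph-mon.target, which is what the enable/start tasks above operate on. A heavily stripped sketch of that arrangement (image and flags are placeholders, not the unit ceph-ansible actually templates):

    cat > /etc/systemd/system/ceph-mon@.service <<'EOF'
    [Unit]
    Description=Ceph monitor %i
    PartOf=ceph-mon.target
    [Service]
    ExecStart=/usr/bin/podman run --rm --net=host --name ceph-mon-%i \
        -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
        <ceph-container-image>
    Restart=always
    [Install]
    WantedBy=ceph-mon.target
    EOF
    systemctl daemon-reload
    systemctl enable ceph-mon.target
    systemctl enable --now ceph-mon@"$(hostname -s)"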
2025-06-11 14:56:23.762541 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-06-11 14:56:23.762547 | orchestrator | Wednesday 11 June 2025 14:49:36 +0000 (0:00:01.946) 0:04:23.262 ********
2025-06-11 14:56:23.762554 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.762560 | orchestrator |
2025-06-11 14:56:23.762567 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-06-11 14:56:23.762574 | orchestrator | Wednesday 11 June 2025 14:49:36 +0000 (0:00:00.829) 0:04:24.091 ********
2025-06-11 14:56:23.762580 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-06-11 14:56:23.762587 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762594 | orchestrator |
2025-06-11 14:56:23.762600 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-06-11 14:56:23.762607 | orchestrator | Wednesday 11 June 2025 14:49:58 +0000 (0:00:21.836) 0:04:45.927 ********
2025-06-11 14:56:23.762613 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762620 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.762626 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.762633 | orchestrator |
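The quorum wait polls until the monitors agree on membership; the first poll fails while the freshly started mons are still electing, and the retry succeeds roughly 22 seconds later (the task's 0:00:21.836 runtime). The probe behind such a wait is roughly:

    # List the monitors currently in quorum; expect all three names
    ceph quorum_status --format json | jq -r '.quorum_names[]'
    # or eyeball it:
    ceph -s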
2025-06-11 14:56:23.762639 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-06-11 14:56:23.762646 | orchestrator | Wednesday 11 June 2025 14:50:08 +0000 (0:00:09.924) 0:04:55.852 ********
2025-06-11 14:56:23.762652 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762659 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762665 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762672 | orchestrator |
2025-06-11 14:56:23.762678 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-06-11 14:56:23.762685 | orchestrator | Wednesday 11 June 2025 14:50:09 +0000 (0:00:00.318) 0:04:56.170 ********
2025-06-11 14:56:23.762716 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-06-11 14:56:23.762727 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-06-11 14:56:23.762735 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-06-11 14:56:23.762743 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-06-11 14:56:23.762755 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-06-11 14:56:23.762763 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b6380f90c55698bb9fb2257d6aa71b2fa6afd1fc'}])
2025-06-11 14:56:23.762770 | orchestrator |
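"Set cluster configs" writes each override into the cluster's central configuration database, one key per loop item, which is why it iterates and takes about 15 seconds; the final item is skipped because osd_crush_chooseleaf_type was left unset (the __omit_place_holder__ marker). The loop is roughly equivalent to:

    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    ceph config set global ms_bind_ipv6 false
    ceph config set global ms_bind_ipv4 true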
2025-06-11 14:56:23.762777 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-11 14:56:23.762783 | orchestrator | Wednesday 11 June 2025 14:50:24 +0000 (0:00:15.195) 0:05:11.366 ********
2025-06-11 14:56:23.762790 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762796 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762803 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762809 | orchestrator |
2025-06-11 14:56:23.762816 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-11 14:56:23.762822 | orchestrator | Wednesday 11 June 2025 14:50:24 +0000 (0:00:00.347) 0:05:11.713 ********
2025-06-11 14:56:23.762829 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.762835 | orchestrator |
2025-06-11 14:56:23.762842 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-11 14:56:23.762848 | orchestrator | Wednesday 11 June 2025 14:50:25 +0000 (0:00:00.769) 0:05:12.482 ********
2025-06-11 14:56:23.762854 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762861 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.762867 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.762874 | orchestrator |
2025-06-11 14:56:23.762881 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-11 14:56:23.762887 | orchestrator | Wednesday 11 June 2025 14:50:25 +0000 (0:00:00.381) 0:05:12.864 ********
2025-06-11 14:56:23.762893 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762900 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.762906 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.762913 | orchestrator |
2025-06-11 14:56:23.762919 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-11 14:56:23.762926 | orchestrator | Wednesday 11 June 2025 14:50:26 +0000 (0:00:00.321) 0:05:13.185 ********
2025-06-11 14:56:23.762933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:56:23.762939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-11 14:56:23.762946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-11 14:56:23.762952 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.762959 | orchestrator |
2025-06-11 14:56:23.762965 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-11 14:56:23.762972 | orchestrator | Wednesday 11 June 2025 14:50:26 +0000 (0:00:00.922) 0:05:14.108 ********
2025-06-11 14:56:23.762978 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.762985 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.762991 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.762998 | orchestrator |
2025-06-11 14:56:23.763022 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-06-11 14:56:23.763035 | orchestrator |
2025-06-11 14:56:23.763042 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-11 14:56:23.763053 | orchestrator | Wednesday 11 June 2025 14:50:27 +0000 (0:00:00.804) 0:05:14.912 ********
2025-06-11 14:56:23.763059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.763066 | orchestrator |
2025-06-11 14:56:23.763072 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-11 14:56:23.763079 | orchestrator | Wednesday 11 June 2025 14:50:28 +0000 (0:00:00.426) 0:05:15.339 ********
2025-06-11 14:56:23.763086 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.763092 | orchestrator |
2025-06-11 14:56:23.763098 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-11 14:56:23.763105 | orchestrator | Wednesday 11 June 2025 14:50:28 +0000 (0:00:00.591) 0:05:15.930 ********
2025-06-11 14:56:23.763111 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763118 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763124 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763131 | orchestrator |
2025-06-11 14:56:23.763137 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-11 14:56:23.763144 | orchestrator | Wednesday 11 June 2025 14:50:29 +0000 (0:00:00.651) 0:05:16.582 ********
2025-06-11 14:56:23.763150 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763157 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763163 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763170 | orchestrator |
2025-06-11 14:56:23.763176 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-11 14:56:23.763183 | orchestrator | Wednesday 11 June 2025 14:50:29 +0000 (0:00:00.272) 0:05:16.854 ********
2025-06-11 14:56:23.763189 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763196 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763203 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763209 | orchestrator |
2025-06-11 14:56:23.763215 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-11 14:56:23.763222 | orchestrator | Wednesday 11 June 2025 14:50:30 +0000 (0:00:00.411) 0:05:17.265 ********
2025-06-11 14:56:23.763228 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763235 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763241 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763248 | orchestrator |
2025-06-11 14:56:23.763254 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-11 14:56:23.763261 | orchestrator | Wednesday 11 June 2025 14:50:30 +0000 (0:00:00.298) 0:05:17.563 ********
2025-06-11 14:56:23.763267 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763308 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763316 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763323 | orchestrator |
2025-06-11 14:56:23.763329 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-11 14:56:23.763336 | orchestrator | Wednesday 11 June 2025 14:50:31 +0000 (0:00:00.654) 0:05:18.218 ********
2025-06-11 14:56:23.763342 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763349 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763356 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763362 | orchestrator |
2025-06-11 14:56:23.763369 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-11 14:56:23.763376 | orchestrator | Wednesday 11 June 2025 14:50:31 +0000 (0:00:00.274) 0:05:18.492 ********
2025-06-11 14:56:23.763382 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763389 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763395 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763402 | orchestrator |
2025-06-11 14:56:23.763408 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-11 14:56:23.763415 | orchestrator | Wednesday 11 June 2025 14:50:31 +0000 (0:00:00.431) 0:05:18.924 ********
2025-06-11 14:56:23.763427 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763433 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763440 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763446 | orchestrator |
2025-06-11 14:56:23.763453 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-11 14:56:23.763459 | orchestrator | Wednesday 11 June 2025 14:50:32 +0000 (0:00:00.681) 0:05:19.605 ********
2025-06-11 14:56:23.763466 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763472 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763479 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763485 | orchestrator |
2025-06-11 14:56:23.763492 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-11 14:56:23.763498 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.250) 0:05:20.308 ********
2025-06-11 14:56:23.763505 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763511 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763518 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763524 | orchestrator |
2025-06-11 14:56:23.763531 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-11 14:56:23.763537 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.250) 0:05:20.559 ********
2025-06-11 14:56:23.763544 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763550 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763556 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763563 | orchestrator |
2025-06-11 14:56:23.763570 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-11 14:56:23.763576 | orchestrator | Wednesday 11 June 2025 14:50:33 +0000 (0:00:00.463) 0:05:21.023 ********
2025-06-11 14:56:23.763583 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763589 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763596 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763602 | orchestrator |
2025-06-11 14:56:23.763609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-11 14:56:23.763615 | orchestrator | Wednesday 11 June 2025 14:50:34 +0000 (0:00:00.293) 0:05:21.317 ********
2025-06-11 14:56:23.763622 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763628 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763655 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763664 | orchestrator |
2025-06-11 14:56:23.763670 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-11 14:56:23.763681 | orchestrator | Wednesday 11 June 2025 14:50:34 +0000 (0:00:00.337) 0:05:21.654 ********
2025-06-11 14:56:23.763687 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763694 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763700 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763707 | orchestrator |
2025-06-11 14:56:23.763714 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-11 14:56:23.763720 | orchestrator | Wednesday 11 June 2025 14:50:34 +0000 (0:00:00.330) 0:05:21.985 ********
2025-06-11 14:56:23.763727 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763733 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763740 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763746 | orchestrator |
2025-06-11 14:56:23.763753 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-11 14:56:23.763759 | orchestrator | Wednesday 11 June 2025 14:50:35 +0000 (0:00:00.573) 0:05:22.559 ********
2025-06-11 14:56:23.763766 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.763773 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.763779 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.763786 | orchestrator |
2025-06-11 14:56:23.763792 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-11 14:56:23.763799 | orchestrator | Wednesday 11 June 2025 14:50:35 +0000 (0:00:00.331) 0:05:22.890 ********
2025-06-11 14:56:23.763810 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763817 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763823 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763830 | orchestrator |
2025-06-11 14:56:23.763836 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-11 14:56:23.763842 | orchestrator | Wednesday 11 June 2025 14:50:36 +0000 (0:00:00.357) 0:05:23.247 ********
2025-06-11 14:56:23.763848 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763854 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763860 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763866 | orchestrator |
2025-06-11 14:56:23.763873 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-11 14:56:23.763879 | orchestrator | Wednesday 11 June 2025 14:50:36 +0000 (0:00:00.378) 0:05:23.626 ********
2025-06-11 14:56:23.763885 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.763891 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.763897 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.763903 | orchestrator |
2025-06-11 14:56:23.763909 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-06-11 14:56:23.763915 | orchestrator | Wednesday 11 June 2025 14:50:37 +0000 (0:00:00.819) 0:05:24.445 ********
2025-06-11 14:56:23.763921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-11 14:56:23.763928 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:56:23.763934 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:56:23.763940 | orchestrator |
2025-06-11 14:56:23.763946 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-06-11 14:56:23.763952 | orchestrator | Wednesday 11 June 2025 14:50:37 +0000 (0:00:00.640) 0:05:25.086 ********
2025-06-11 14:56:23.763959 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.763965 | orchestrator |
2025-06-11 14:56:23.763971 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-06-11 14:56:23.763977 | orchestrator | Wednesday 11 June 2025 14:50:38 +0000 (0:00:00.514) 0:05:25.601 ********
2025-06-11 14:56:23.763983 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:56:23.763989 | orchestrator | changed: [testbed-node-1]
2025-06-11 14:56:23.763995 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:56:23.764001 | orchestrator |
2025-06-11 14:56:23.764008 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-06-11 14:56:23.764014 | orchestrator | Wednesday 11 June 2025 14:50:39 +0000 (0:00:00.994) 0:05:26.595 ********
2025-06-11 14:56:23.764020 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.764026 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.764032 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.764038 | orchestrator |
2025-06-11 14:56:23.764044 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-06-11 14:56:23.764050 | orchestrator | Wednesday 11 June 2025 14:50:39 +0000 (0:00:00.360) 0:05:26.956 ********
2025-06-11 14:56:23.764056 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764063 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764069 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764075 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-06-11 14:56:23.764081 | orchestrator |
2025-06-11 14:56:23.764087 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-06-11 14:56:23.764093 | orchestrator | Wednesday 11 June 2025 14:50:50 +0000 (0:00:10.717) 0:05:37.673 ********
2025-06-11 14:56:23.764099 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.764105 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.764111 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.764117 | orchestrator |
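Each mgr gets its own mgr.<hostname> key. The keys are created on the first mon, which is why a single delegated task reports three changed items, and the following tasks fetch and distribute them. The conventional command, with the capability profile from the Ceph documentation (hostname and output path illustrative):

    # Create a mgr keyring on a mon node
    ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /var/lib/ceph/mgr/ceph-testbed-node-0/keyring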
2025-06-11 14:56:23.764123 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-06-11 14:56:23.764133 | orchestrator | Wednesday 11 June 2025 14:50:50 +0000 (0:00:00.417) 0:05:38.091 ********
2025-06-11 14:56:23.764139 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764146 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-11 14:56:23.764152 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-11 14:56:23.764158 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764164 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-11 14:56:23.764170 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-11 14:56:23.764176 | orchestrator |
2025-06-11 14:56:23.764201 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-06-11 14:56:23.764208 | orchestrator | Wednesday 11 June 2025 14:50:53 +0000 (0:00:02.472) 0:05:40.564 ********
2025-06-11 14:56:23.764218 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764224 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-11 14:56:23.764230 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-11 14:56:23.764236 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 14:56:23.764242 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-11 14:56:23.764248 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-11 14:56:23.764254 | orchestrator |
2025-06-11 14:56:23.764260 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-06-11 14:56:23.764267 | orchestrator | Wednesday 11 June 2025 14:50:54 +0000 (0:00:01.579) 0:05:42.144 ********
2025-06-11 14:56:23.764290 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:56:23.764296 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:56:23.764303 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:56:23.764309 | orchestrator |
2025-06-11 14:56:23.764315 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-06-11 14:56:23.764321 | orchestrator | Wednesday 11 June 2025 14:50:55 +0000 (0:00:00.708) 0:05:42.852 ********
2025-06-11 14:56:23.764327 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.764333 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.764339 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.764345 | orchestrator |
2025-06-11 14:56:23.764351 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-06-11 14:56:23.764357 | orchestrator | Wednesday 11 June 2025 14:50:55 +0000 (0:00:00.299) 0:05:43.151 ********
2025-06-11 14:56:23.764364 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:56:23.764370 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:56:23.764376 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:56:23.764382 | orchestrator |
2025-06-11 14:56:23.764388 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-11 14:56:23.764394 | orchestrator | Wednesday 11 June 2025 14:50:56 +0000 (0:00:00.303) 0:05:43.455 ********
2025-06-11 14:56:23.764400 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:56:23.764406 | orchestrator |
[ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-11 14:56:23.764419 | orchestrator | Wednesday 11 June 2025 14:50:57 +0000 (0:00:00.761) 0:05:44.216 ******** 2025-06-11 14:56:23.764425 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.764431 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.764437 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.764443 | orchestrator | 2025-06-11 14:56:23.764449 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-11 14:56:23.764455 | orchestrator | Wednesday 11 June 2025 14:50:57 +0000 (0:00:00.312) 0:05:44.529 ******** 2025-06-11 14:56:23.764461 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.764467 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.764473 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.764479 | orchestrator | 2025-06-11 14:56:23.764485 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-11 14:56:23.764497 | orchestrator | Wednesday 11 June 2025 14:50:57 +0000 (0:00:00.329) 0:05:44.858 ******** 2025-06-11 14:56:23.764503 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.764509 | orchestrator | 2025-06-11 14:56:23.764515 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-11 14:56:23.764521 | orchestrator | Wednesday 11 June 2025 14:50:58 +0000 (0:00:00.767) 0:05:45.626 ******** 2025-06-11 14:56:23.764527 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.764533 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.764539 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.764545 | orchestrator | 2025-06-11 14:56:23.764552 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-11 14:56:23.764557 | orchestrator | Wednesday 11 June 2025 14:50:59 +0000 (0:00:01.281) 0:05:46.907 ******** 2025-06-11 14:56:23.764563 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.764569 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.764576 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.764582 | orchestrator | 2025-06-11 14:56:23.764588 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-11 14:56:23.764594 | orchestrator | Wednesday 11 June 2025 14:51:00 +0000 (0:00:01.174) 0:05:48.081 ******** 2025-06-11 14:56:23.764600 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.764606 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.764612 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.764618 | orchestrator | 2025-06-11 14:56:23.764624 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-11 14:56:23.764630 | orchestrator | Wednesday 11 June 2025 14:51:03 +0000 (0:00:02.084) 0:05:50.166 ******** 2025-06-11 14:56:23.764636 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.764642 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.764648 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.764654 | orchestrator | 2025-06-11 14:56:23.764660 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-11 14:56:23.764666 | orchestrator | Wednesday 11 June 
2025 14:51:05 +0000 (0:00:02.021) 0:05:52.187 ******** 2025-06-11 14:56:23.764672 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.764678 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.764684 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-11 14:56:23.764690 | orchestrator | 2025-06-11 14:56:23.764696 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-11 14:56:23.764702 | orchestrator | Wednesday 11 June 2025 14:51:05 +0000 (0:00:00.388) 0:05:52.575 ******** 2025-06-11 14:56:23.764708 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-11 14:56:23.764733 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-11 14:56:23.764744 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-11 14:56:23.764750 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-11 14:56:23.764756 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-06-11 14:56:23.764762 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.764768 | orchestrator | 2025-06-11 14:56:23.764775 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-11 14:56:23.764781 | orchestrator | Wednesday 11 June 2025 14:51:35 +0000 (0:00:30.236) 0:06:22.811 ******** 2025-06-11 14:56:23.764786 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.764793 | orchestrator | 2025-06-11 14:56:23.764799 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-11 14:56:23.764809 | orchestrator | Wednesday 11 June 2025 14:51:37 +0000 (0:00:01.656) 0:06:24.468 ******** 2025-06-11 14:56:23.764815 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.764821 | orchestrator | 2025-06-11 14:56:23.764827 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-11 14:56:23.764833 | orchestrator | Wednesday 11 June 2025 14:51:38 +0000 (0:00:00.814) 0:06:25.283 ******** 2025-06-11 14:56:23.764839 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.764845 | orchestrator | 2025-06-11 14:56:23.764851 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-11 14:56:23.764857 | orchestrator | Wednesday 11 June 2025 14:51:38 +0000 (0:00:00.168) 0:06:25.452 ******** 2025-06-11 14:56:23.764863 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-11 14:56:23.764870 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-11 14:56:23.764876 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-11 14:56:23.764882 | orchestrator | 2025-06-11 14:56:23.764888 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-11 14:56:23.764894 | orchestrator | Wednesday 11 June 2025 14:51:44 +0000 (0:00:06.424) 0:06:31.876 ******** 2025-06-11 14:56:23.764900 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-11 
14:56:23.764907 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-11 14:56:23.764913 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-11 14:56:23.764919 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-11 14:56:23.764925 | orchestrator | 2025-06-11 14:56:23.764931 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-11 14:56:23.764937 | orchestrator | Wednesday 11 June 2025 14:51:49 +0000 (0:00:04.754) 0:06:36.631 ******** 2025-06-11 14:56:23.764943 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.764949 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.764955 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.764961 | orchestrator | 2025-06-11 14:56:23.764967 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-11 14:56:23.764973 | orchestrator | Wednesday 11 June 2025 14:51:50 +0000 (0:00:00.942) 0:06:37.574 ******** 2025-06-11 14:56:23.764979 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.764985 | orchestrator | 2025-06-11 14:56:23.764991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-11 14:56:23.764997 | orchestrator | Wednesday 11 June 2025 14:51:50 +0000 (0:00:00.528) 0:06:38.103 ******** 2025-06-11 14:56:23.765003 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.765009 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.765015 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.765021 | orchestrator | 2025-06-11 14:56:23.765027 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-11 14:56:23.765033 | orchestrator | Wednesday 11 June 2025 14:51:51 +0000 (0:00:00.305) 0:06:38.408 ******** 2025-06-11 14:56:23.765040 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.765046 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.765052 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.765058 | orchestrator | 2025-06-11 14:56:23.765064 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-11 14:56:23.765070 | orchestrator | Wednesday 11 June 2025 14:51:52 +0000 (0:00:01.458) 0:06:39.867 ******** 2025-06-11 14:56:23.765076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-11 14:56:23.765082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-11 14:56:23.765088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-11 14:56:23.765094 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.765107 | orchestrator | 2025-06-11 14:56:23.765114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-11 14:56:23.765121 | orchestrator | Wednesday 11 June 2025 14:51:53 +0000 (0:00:00.638) 0:06:40.505 ******** 2025-06-11 14:56:23.765127 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.765133 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.765139 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.765145 | orchestrator | 2025-06-11 14:56:23.765151 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-11 
14:56:23.765157 | orchestrator | 2025-06-11 14:56:23.765163 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-11 14:56:23.765169 | orchestrator | Wednesday 11 June 2025 14:51:53 +0000 (0:00:00.549) 0:06:41.055 ******** 2025-06-11 14:56:23.765175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.765181 | orchestrator | 2025-06-11 14:56:23.765206 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-11 14:56:23.765218 | orchestrator | Wednesday 11 June 2025 14:51:54 +0000 (0:00:00.774) 0:06:41.829 ******** 2025-06-11 14:56:23.765224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.765231 | orchestrator | 2025-06-11 14:56:23.765237 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-11 14:56:23.765243 | orchestrator | Wednesday 11 June 2025 14:51:55 +0000 (0:00:00.509) 0:06:42.339 ******** 2025-06-11 14:56:23.765249 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765255 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765261 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765267 | orchestrator | 2025-06-11 14:56:23.765286 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-11 14:56:23.765293 | orchestrator | Wednesday 11 June 2025 14:51:55 +0000 (0:00:00.300) 0:06:42.639 ******** 2025-06-11 14:56:23.765299 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765305 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765311 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765317 | orchestrator | 2025-06-11 14:56:23.765323 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-11 14:56:23.765330 | orchestrator | Wednesday 11 June 2025 14:51:56 +0000 (0:00:00.961) 0:06:43.600 ******** 2025-06-11 14:56:23.765336 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765342 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765348 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765354 | orchestrator | 2025-06-11 14:56:23.765360 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-11 14:56:23.765366 | orchestrator | Wednesday 11 June 2025 14:51:57 +0000 (0:00:00.711) 0:06:44.312 ******** 2025-06-11 14:56:23.765372 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765378 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765384 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765390 | orchestrator | 2025-06-11 14:56:23.765396 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-11 14:56:23.765402 | orchestrator | Wednesday 11 June 2025 14:51:57 +0000 (0:00:00.685) 0:06:44.997 ******** 2025-06-11 14:56:23.765408 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765414 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765420 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765426 | orchestrator | 2025-06-11 14:56:23.765432 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-11 14:56:23.765438 | orchestrator | 
Wednesday 11 June 2025 14:51:58 +0000 (0:00:00.295) 0:06:45.293 ******** 2025-06-11 14:56:23.765444 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765450 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765456 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765462 | orchestrator | 2025-06-11 14:56:23.765473 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-11 14:56:23.765479 | orchestrator | Wednesday 11 June 2025 14:51:58 +0000 (0:00:00.519) 0:06:45.813 ******** 2025-06-11 14:56:23.765485 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765491 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765497 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765503 | orchestrator | 2025-06-11 14:56:23.765510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-11 14:56:23.765515 | orchestrator | Wednesday 11 June 2025 14:51:58 +0000 (0:00:00.298) 0:06:46.111 ******** 2025-06-11 14:56:23.765521 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765528 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765534 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765540 | orchestrator | 2025-06-11 14:56:23.765546 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-11 14:56:23.765552 | orchestrator | Wednesday 11 June 2025 14:51:59 +0000 (0:00:00.665) 0:06:46.777 ******** 2025-06-11 14:56:23.765558 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765564 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765570 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765576 | orchestrator | 2025-06-11 14:56:23.765582 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-11 14:56:23.765588 | orchestrator | Wednesday 11 June 2025 14:52:00 +0000 (0:00:00.681) 0:06:47.458 ******** 2025-06-11 14:56:23.765594 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765600 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765606 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765612 | orchestrator | 2025-06-11 14:56:23.765618 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-11 14:56:23.765624 | orchestrator | Wednesday 11 June 2025 14:52:00 +0000 (0:00:00.526) 0:06:47.985 ******** 2025-06-11 14:56:23.765630 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765636 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765642 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765648 | orchestrator | 2025-06-11 14:56:23.765654 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-11 14:56:23.765660 | orchestrator | Wednesday 11 June 2025 14:52:01 +0000 (0:00:00.309) 0:06:48.295 ******** 2025-06-11 14:56:23.765666 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765672 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765678 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765685 | orchestrator | 2025-06-11 14:56:23.765691 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-11 14:56:23.765697 | orchestrator | Wednesday 11 June 2025 14:52:01 +0000 (0:00:00.306) 0:06:48.601 ******** 2025-06-11 14:56:23.765703 | orchestrator 
| ok: [testbed-node-3] 2025-06-11 14:56:23.765709 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765715 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765721 | orchestrator | 2025-06-11 14:56:23.765727 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-11 14:56:23.765734 | orchestrator | Wednesday 11 June 2025 14:52:01 +0000 (0:00:00.328) 0:06:48.930 ******** 2025-06-11 14:56:23.765740 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765746 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765752 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765758 | orchestrator | 2025-06-11 14:56:23.765767 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-11 14:56:23.765774 | orchestrator | Wednesday 11 June 2025 14:52:02 +0000 (0:00:00.559) 0:06:49.489 ******** 2025-06-11 14:56:23.765784 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765790 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765796 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765802 | orchestrator | 2025-06-11 14:56:23.765808 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-11 14:56:23.765814 | orchestrator | Wednesday 11 June 2025 14:52:02 +0000 (0:00:00.289) 0:06:49.779 ******** 2025-06-11 14:56:23.765824 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765831 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765837 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765843 | orchestrator | 2025-06-11 14:56:23.765849 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-11 14:56:23.765855 | orchestrator | Wednesday 11 June 2025 14:52:02 +0000 (0:00:00.312) 0:06:50.092 ******** 2025-06-11 14:56:23.765861 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.765867 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.765873 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.765879 | orchestrator | 2025-06-11 14:56:23.765885 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-11 14:56:23.765891 | orchestrator | Wednesday 11 June 2025 14:52:03 +0000 (0:00:00.304) 0:06:50.397 ******** 2025-06-11 14:56:23.765897 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765903 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765909 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765915 | orchestrator | 2025-06-11 14:56:23.765921 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-11 14:56:23.765927 | orchestrator | Wednesday 11 June 2025 14:52:03 +0000 (0:00:00.562) 0:06:50.960 ******** 2025-06-11 14:56:23.765933 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765939 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765945 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.765951 | orchestrator | 2025-06-11 14:56:23.765957 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-11 14:56:23.765963 | orchestrator | Wednesday 11 June 2025 14:52:04 +0000 (0:00:00.517) 0:06:51.478 ******** 2025-06-11 14:56:23.765969 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.765976 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.765981 | orchestrator | 
ok: [testbed-node-5] 2025-06-11 14:56:23.765987 | orchestrator | 2025-06-11 14:56:23.765994 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-11 14:56:23.766000 | orchestrator | Wednesday 11 June 2025 14:52:04 +0000 (0:00:00.314) 0:06:51.793 ******** 2025-06-11 14:56:23.766006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-11 14:56:23.766012 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-11 14:56:23.766035 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-11 14:56:23.766041 | orchestrator | 2025-06-11 14:56:23.766047 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-11 14:56:23.766054 | orchestrator | Wednesday 11 June 2025 14:52:05 +0000 (0:00:00.890) 0:06:52.683 ******** 2025-06-11 14:56:23.766060 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.766066 | orchestrator | 2025-06-11 14:56:23.766072 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-11 14:56:23.766078 | orchestrator | Wednesday 11 June 2025 14:52:06 +0000 (0:00:00.791) 0:06:53.474 ******** 2025-06-11 14:56:23.766084 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.766090 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.766096 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.766102 | orchestrator | 2025-06-11 14:56:23.766108 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-11 14:56:23.766114 | orchestrator | Wednesday 11 June 2025 14:52:06 +0000 (0:00:00.319) 0:06:53.794 ******** 2025-06-11 14:56:23.766121 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.766127 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.766133 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.766139 | orchestrator | 2025-06-11 14:56:23.766145 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-11 14:56:23.766155 | orchestrator | Wednesday 11 June 2025 14:52:06 +0000 (0:00:00.297) 0:06:54.092 ******** 2025-06-11 14:56:23.766162 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.766168 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.766174 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.766180 | orchestrator | 2025-06-11 14:56:23.766186 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-11 14:56:23.766192 | orchestrator | Wednesday 11 June 2025 14:52:07 +0000 (0:00:00.853) 0:06:54.945 ******** 2025-06-11 14:56:23.766198 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.766204 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.766210 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.766216 | orchestrator | 2025-06-11 14:56:23.766222 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-11 14:56:23.766228 | orchestrator | Wednesday 11 June 2025 14:52:08 +0000 (0:00:00.482) 0:06:55.428 ******** 2025-06-11 14:56:23.766234 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-11 14:56:23.766240 | orchestrator | 
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-11 14:56:23.766246 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-11 14:56:23.766253 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-11 14:56:23.766259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-11 14:56:23.766265 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-11 14:56:23.766286 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-11 14:56:23.766297 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-11 14:56:23.766303 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-11 14:56:23.766309 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-11 14:56:23.766315 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-11 14:56:23.766321 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-11 14:56:23.766327 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-11 14:56:23.766333 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-11 14:56:23.766339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-11 14:56:23.766345 | orchestrator | 2025-06-11 14:56:23.766351 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-11 14:56:23.766358 | orchestrator | Wednesday 11 June 2025 14:52:11 +0000 (0:00:03.382) 0:06:58.810 ******** 2025-06-11 14:56:23.766364 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.766370 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.766376 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.766382 | orchestrator | 2025-06-11 14:56:23.766388 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-11 14:56:23.766394 | orchestrator | Wednesday 11 June 2025 14:52:12 +0000 (0:00:00.350) 0:06:59.161 ******** 2025-06-11 14:56:23.766400 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.766406 | orchestrator | 2025-06-11 14:56:23.766412 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-11 14:56:23.766418 | orchestrator | Wednesday 11 June 2025 14:52:12 +0000 (0:00:00.989) 0:07:00.151 ******** 2025-06-11 14:56:23.766424 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-11 14:56:23.766430 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-11 14:56:23.766441 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-11 14:56:23.766447 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-11 14:56:23.766453 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-11 14:56:23.766459 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-11 
14:56:23.766466 | orchestrator | 2025-06-11 14:56:23.766472 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-11 14:56:23.766478 | orchestrator | Wednesday 11 June 2025 14:52:14 +0000 (0:00:01.042) 0:07:01.193 ******** 2025-06-11 14:56:23.766484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.766490 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.766496 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.766502 | orchestrator | 2025-06-11 14:56:23.766508 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-11 14:56:23.766514 | orchestrator | Wednesday 11 June 2025 14:52:16 +0000 (0:00:02.128) 0:07:03.322 ******** 2025-06-11 14:56:23.766521 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-11 14:56:23.766527 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.766533 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.766539 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-11 14:56:23.766545 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-11 14:56:23.766551 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.766557 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-11 14:56:23.766563 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-11 14:56:23.766569 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.766575 | orchestrator | 2025-06-11 14:56:23.766581 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-11 14:56:23.766587 | orchestrator | Wednesday 11 June 2025 14:52:17 +0000 (0:00:01.162) 0:07:04.485 ******** 2025-06-11 14:56:23.766593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.766599 | orchestrator | 2025-06-11 14:56:23.766605 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-11 14:56:23.766611 | orchestrator | Wednesday 11 June 2025 14:52:20 +0000 (0:00:02.713) 0:07:07.198 ******** 2025-06-11 14:56:23.766617 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.766624 | orchestrator | 2025-06-11 14:56:23.766630 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-11 14:56:23.766636 | orchestrator | Wednesday 11 June 2025 14:52:20 +0000 (0:00:00.573) 0:07:07.771 ******** 2025-06-11 14:56:23.766642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d502667e-47a1-548a-a5f2-2993142d2957', 'data_vg': 'ceph-d502667e-47a1-548a-a5f2-2993142d2957'}) 2025-06-11 14:56:23.766649 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af7ee71e-f6e2-506a-9b19-157b61fbf28d', 'data_vg': 'ceph-af7ee71e-f6e2-506a-9b19-157b61fbf28d'}) 2025-06-11 14:56:23.766655 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-28682609-b410-5575-84cb-1d408b8d4d4a', 'data_vg': 'ceph-28682609-b410-5575-84cb-1d408b8d4d4a'}) 2025-06-11 14:56:23.766665 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee9e3135-eac7-54c9-a7bd-c984355157b1', 'data_vg': 'ceph-ee9e3135-eac7-54c9-a7bd-c984355157b1'}) 2025-06-11 14:56:23.766675 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-40a0a619-d38c-5879-89ae-a3eefd65fa41', 'data_vg': 'ceph-40a0a619-d38c-5879-89ae-a3eefd65fa41'}) 2025-06-11 14:56:23.766681 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b6a3d2e7-9824-554b-8cae-981831ed9e32', 'data_vg': 'ceph-b6a3d2e7-9824-554b-8cae-981831ed9e32'}) 2025-06-11 14:56:23.766688 | orchestrator | 2025-06-11 14:56:23.766694 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-11 14:56:23.766704 | orchestrator | Wednesday 11 June 2025 14:53:04 +0000 (0:00:44.287) 0:07:52.059 ******** 2025-06-11 14:56:23.766710 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.766716 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.766722 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.766728 | orchestrator | 2025-06-11 14:56:23.766735 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-11 14:56:23.766745 | orchestrator | Wednesday 11 June 2025 14:53:05 +0000 (0:00:00.519) 0:07:52.579 ******** 2025-06-11 14:56:23.766755 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.766766 | orchestrator | 2025-06-11 14:56:23.766777 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-11 14:56:23.766788 | orchestrator | Wednesday 11 June 2025 14:53:05 +0000 (0:00:00.498) 0:07:53.078 ******** 2025-06-11 14:56:23.766799 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.766809 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.766815 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.766821 | orchestrator | 2025-06-11 14:56:23.766828 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-11 14:56:23.766834 | orchestrator | Wednesday 11 June 2025 14:53:06 +0000 (0:00:00.630) 0:07:53.708 ******** 2025-06-11 14:56:23.766840 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.766846 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.766852 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.766858 | orchestrator | 2025-06-11 14:56:23.766864 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-11 14:56:23.766870 | orchestrator | Wednesday 11 June 2025 14:53:09 +0000 (0:00:02.908) 0:07:56.617 ******** 2025-06-11 14:56:23.766876 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.766882 | orchestrator | 2025-06-11 14:56:23.766888 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-11 14:56:23.766894 | orchestrator | Wednesday 11 June 2025 14:53:10 +0000 (0:00:00.563) 0:07:57.181 ******** 2025-06-11 14:56:23.766900 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.766906 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.766912 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.766918 | orchestrator | 2025-06-11 14:56:23.766924 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-11 14:56:23.766930 | orchestrator | Wednesday 11 June 2025 14:53:11 +0000 (0:00:01.173) 0:07:58.354 ******** 2025-06-11 14:56:23.766937 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.766943 | orchestrator | changed: 
[testbed-node-4] 2025-06-11 14:56:23.766949 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.766955 | orchestrator | 2025-06-11 14:56:23.766961 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-11 14:56:23.766967 | orchestrator | Wednesday 11 June 2025 14:53:12 +0000 (0:00:01.654) 0:08:00.009 ******** 2025-06-11 14:56:23.766973 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.766979 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.766985 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.766991 | orchestrator | 2025-06-11 14:56:23.766997 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-11 14:56:23.767003 | orchestrator | Wednesday 11 June 2025 14:53:14 +0000 (0:00:01.876) 0:08:01.885 ******** 2025-06-11 14:56:23.767010 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767016 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767022 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767028 | orchestrator | 2025-06-11 14:56:23.767034 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-11 14:56:23.767040 | orchestrator | Wednesday 11 June 2025 14:53:15 +0000 (0:00:00.333) 0:08:02.219 ******** 2025-06-11 14:56:23.767046 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767057 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767063 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767069 | orchestrator | 2025-06-11 14:56:23.767076 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-11 14:56:23.767082 | orchestrator | Wednesday 11 June 2025 14:53:15 +0000 (0:00:00.313) 0:08:02.532 ******** 2025-06-11 14:56:23.767088 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-11 14:56:23.767094 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-11 14:56:23.767100 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-11 14:56:23.767106 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-11 14:56:23.767112 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-11 14:56:23.767118 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-06-11 14:56:23.767124 | orchestrator | 2025-06-11 14:56:23.767130 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-11 14:56:23.767137 | orchestrator | Wednesday 11 June 2025 14:53:16 +0000 (0:00:01.319) 0:08:03.851 ******** 2025-06-11 14:56:23.767143 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-11 14:56:23.767149 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-11 14:56:23.767155 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-11 14:56:23.767161 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-11 14:56:23.767167 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-11 14:56:23.767173 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-11 14:56:23.767179 | orchestrator | 2025-06-11 14:56:23.767190 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-11 14:56:23.767197 | orchestrator | Wednesday 11 June 2025 14:53:18 +0000 (0:00:02.143) 0:08:05.994 ******** 2025-06-11 14:56:23.767207 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-11 14:56:23.767213 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-11 
14:56:23.767219 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-11 14:56:23.767225 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-11 14:56:23.767231 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-11 14:56:23.767237 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-11 14:56:23.767243 | orchestrator | 2025-06-11 14:56:23.767249 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-11 14:56:23.767255 | orchestrator | Wednesday 11 June 2025 14:53:22 +0000 (0:00:03.493) 0:08:09.488 ******** 2025-06-11 14:56:23.767261 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767267 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767314 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.767325 | orchestrator | 2025-06-11 14:56:23.767335 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-11 14:56:23.767345 | orchestrator | Wednesday 11 June 2025 14:53:25 +0000 (0:00:03.195) 0:08:12.684 ******** 2025-06-11 14:56:23.767351 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767357 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767363 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-11 14:56:23.767369 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.767375 | orchestrator | 2025-06-11 14:56:23.767381 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-11 14:56:23.767387 | orchestrator | Wednesday 11 June 2025 14:53:38 +0000 (0:00:13.017) 0:08:25.702 ******** 2025-06-11 14:56:23.767393 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767399 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767405 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767411 | orchestrator | 2025-06-11 14:56:23.767417 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-11 14:56:23.767423 | orchestrator | Wednesday 11 June 2025 14:53:39 +0000 (0:00:00.892) 0:08:26.594 ******** 2025-06-11 14:56:23.767436 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767442 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767448 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767454 | orchestrator | 2025-06-11 14:56:23.767460 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-11 14:56:23.767466 | orchestrator | Wednesday 11 June 2025 14:53:40 +0000 (0:00:00.629) 0:08:27.223 ******** 2025-06-11 14:56:23.767472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.767479 | orchestrator | 2025-06-11 14:56:23.767485 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-11 14:56:23.767491 | orchestrator | Wednesday 11 June 2025 14:53:40 +0000 (0:00:00.567) 0:08:27.791 ******** 2025-06-11 14:56:23.767497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.767503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:56:23.767509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 
14:56:23.767515 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767521 | orchestrator | 2025-06-11 14:56:23.767527 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-11 14:56:23.767533 | orchestrator | Wednesday 11 June 2025 14:53:41 +0000 (0:00:00.376) 0:08:28.167 ******** 2025-06-11 14:56:23.767539 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767545 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767551 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767557 | orchestrator | 2025-06-11 14:56:23.767563 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-11 14:56:23.767569 | orchestrator | Wednesday 11 June 2025 14:53:41 +0000 (0:00:00.315) 0:08:28.483 ******** 2025-06-11 14:56:23.767575 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767582 | orchestrator | 2025-06-11 14:56:23.767588 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-11 14:56:23.767594 | orchestrator | Wednesday 11 June 2025 14:53:41 +0000 (0:00:00.273) 0:08:28.757 ******** 2025-06-11 14:56:23.767600 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767606 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767612 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767618 | orchestrator | 2025-06-11 14:56:23.767624 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-11 14:56:23.767630 | orchestrator | Wednesday 11 June 2025 14:53:42 +0000 (0:00:00.515) 0:08:29.273 ******** 2025-06-11 14:56:23.767636 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767642 | orchestrator | 2025-06-11 14:56:23.767648 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-11 14:56:23.767654 | orchestrator | Wednesday 11 June 2025 14:53:42 +0000 (0:00:00.259) 0:08:29.532 ******** 2025-06-11 14:56:23.767660 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767666 | orchestrator | 2025-06-11 14:56:23.767672 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-11 14:56:23.767678 | orchestrator | Wednesday 11 June 2025 14:53:42 +0000 (0:00:00.217) 0:08:29.749 ******** 2025-06-11 14:56:23.767684 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767690 | orchestrator | 2025-06-11 14:56:23.767696 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-11 14:56:23.767702 | orchestrator | Wednesday 11 June 2025 14:53:42 +0000 (0:00:00.127) 0:08:29.877 ******** 2025-06-11 14:56:23.767708 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767714 | orchestrator | 2025-06-11 14:56:23.767720 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-11 14:56:23.767726 | orchestrator | Wednesday 11 June 2025 14:53:42 +0000 (0:00:00.241) 0:08:30.118 ******** 2025-06-11 14:56:23.767737 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767743 | orchestrator | 2025-06-11 14:56:23.767749 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-11 14:56:23.767764 | orchestrator | Wednesday 11 June 2025 14:53:43 +0000 (0:00:00.220) 0:08:30.339 ******** 2025-06-11 14:56:23.767770 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:56:23.767776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:56:23.767782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.767788 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767794 | orchestrator | 2025-06-11 14:56:23.767800 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-11 14:56:23.767806 | orchestrator | Wednesday 11 June 2025 14:53:43 +0000 (0:00:00.396) 0:08:30.735 ******** 2025-06-11 14:56:23.767812 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767818 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767824 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767830 | orchestrator | 2025-06-11 14:56:23.767837 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-11 14:56:23.767842 | orchestrator | Wednesday 11 June 2025 14:53:43 +0000 (0:00:00.304) 0:08:31.039 ******** 2025-06-11 14:56:23.767847 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767853 | orchestrator | 2025-06-11 14:56:23.767858 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-11 14:56:23.767863 | orchestrator | Wednesday 11 June 2025 14:53:44 +0000 (0:00:00.750) 0:08:31.790 ******** 2025-06-11 14:56:23.767868 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767874 | orchestrator | 2025-06-11 14:56:23.767879 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-11 14:56:23.767884 | orchestrator | 2025-06-11 14:56:23.767890 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-11 14:56:23.767895 | orchestrator | Wednesday 11 June 2025 14:53:45 +0000 (0:00:00.643) 0:08:32.434 ******** 2025-06-11 14:56:23.767900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.767906 | orchestrator | 2025-06-11 14:56:23.767911 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-11 14:56:23.767917 | orchestrator | Wednesday 11 June 2025 14:53:46 +0000 (0:00:01.193) 0:08:33.627 ******** 2025-06-11 14:56:23.767922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.767927 | orchestrator | 2025-06-11 14:56:23.767933 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-11 14:56:23.767938 | orchestrator | Wednesday 11 June 2025 14:53:47 +0000 (0:00:01.196) 0:08:34.824 ******** 2025-06-11 14:56:23.767944 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.767949 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.767954 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.767960 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.767965 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.767970 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.767976 | orchestrator | 2025-06-11 14:56:23.767981 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 
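(Annotation, not console output.) The per-daemon container checks at the start of this ceph-crash play repeat the probe pattern already used for the ceph-osd play above: list containers matching the daemon's name, register the result without reporting a change, and let the later "Set_fact handler_*_status" tasks turn the probe into a boolean that gates the restart handlers. A minimal sketch of that pattern, assuming a Docker runtime; task and variable names here are illustrative, not the role's exact code:

  - name: Check for an osd container
    ansible.builtin.command: docker ps -q --filter "name=ceph-osd"
    register: ceph_osd_container_stat  # illustrative variable name
    changed_when: false
    failed_when: false

  - name: Set_fact handler_osd_status
    ansible.builtin.set_fact:
      handler_osd_status: "{{ (ceph_osd_container_stat.stdout | default('')) | length > 0 }}"

Hosts outside the relevant daemon group skip the probe entirely, which is why the osd checks report ok on testbed-node-3..5 and skipping on the monitor nodes, and vice versa for the mon and mgr checks.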
2025-06-11 14:56:23.767986 | orchestrator | Wednesday 11 June 2025 14:53:48 +0000 (0:00:01.205) 0:08:36.029 ******** 2025-06-11 14:56:23.767992 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.767997 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768003 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768008 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768013 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768018 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768024 | orchestrator | 2025-06-11 14:56:23.768029 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-11 14:56:23.768034 | orchestrator | Wednesday 11 June 2025 14:53:49 +0000 (0:00:00.733) 0:08:36.762 ******** 2025-06-11 14:56:23.768045 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768051 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768056 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768061 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768067 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768072 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768077 | orchestrator | 2025-06-11 14:56:23.768083 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-11 14:56:23.768088 | orchestrator | Wednesday 11 June 2025 14:53:50 +0000 (0:00:00.991) 0:08:37.754 ******** 2025-06-11 14:56:23.768093 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768098 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768104 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768109 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768114 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768119 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768124 | orchestrator | 2025-06-11 14:56:23.768130 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-11 14:56:23.768135 | orchestrator | Wednesday 11 June 2025 14:53:51 +0000 (0:00:00.867) 0:08:38.621 ******** 2025-06-11 14:56:23.768140 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768146 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768151 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768156 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768161 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768167 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768172 | orchestrator | 2025-06-11 14:56:23.768178 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-11 14:56:23.768183 | orchestrator | Wednesday 11 June 2025 14:53:52 +0000 (0:00:01.194) 0:08:39.815 ******** 2025-06-11 14:56:23.768188 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768194 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768199 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768204 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768209 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768218 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768223 | orchestrator | 2025-06-11 14:56:23.768229 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-11 14:56:23.768238 | orchestrator | Wednesday 11 June 2025 
14:53:53 +0000 (0:00:00.613) 0:08:40.429 ******** 2025-06-11 14:56:23.768243 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768249 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768254 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768259 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768264 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768270 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768290 | orchestrator | 2025-06-11 14:56:23.768296 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-11 14:56:23.768301 | orchestrator | Wednesday 11 June 2025 14:53:54 +0000 (0:00:00.818) 0:08:41.248 ******** 2025-06-11 14:56:23.768306 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768312 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768317 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768322 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768328 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768333 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768338 | orchestrator | 2025-06-11 14:56:23.768344 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-11 14:56:23.768349 | orchestrator | Wednesday 11 June 2025 14:53:55 +0000 (0:00:01.063) 0:08:42.311 ******** 2025-06-11 14:56:23.768354 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768359 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768365 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768377 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768382 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768387 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768392 | orchestrator | 2025-06-11 14:56:23.768398 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-11 14:56:23.768403 | orchestrator | Wednesday 11 June 2025 14:53:56 +0000 (0:00:01.244) 0:08:43.556 ******** 2025-06-11 14:56:23.768409 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768414 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768420 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768425 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768431 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768436 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768441 | orchestrator | 2025-06-11 14:56:23.768446 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-11 14:56:23.768452 | orchestrator | Wednesday 11 June 2025 14:53:56 +0000 (0:00:00.594) 0:08:44.150 ******** 2025-06-11 14:56:23.768457 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768462 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768467 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768473 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768478 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768483 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768489 | orchestrator | 2025-06-11 14:56:23.768494 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-11 14:56:23.768500 | orchestrator | Wednesday 11 June 2025 14:53:57 +0000 (0:00:00.832) 0:08:44.983 ******** 2025-06-11 14:56:23.768505 | 
orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768510 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768516 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768521 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768526 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768532 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768537 | orchestrator | 2025-06-11 14:56:23.768542 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-11 14:56:23.768548 | orchestrator | Wednesday 11 June 2025 14:53:58 +0000 (0:00:00.625) 0:08:45.608 ******** 2025-06-11 14:56:23.768553 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768558 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768564 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768569 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768574 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768580 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768585 | orchestrator | 2025-06-11 14:56:23.768590 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-11 14:56:23.768595 | orchestrator | Wednesday 11 June 2025 14:53:59 +0000 (0:00:00.744) 0:08:46.352 ******** 2025-06-11 14:56:23.768601 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768606 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768611 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768617 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768622 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768627 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768633 | orchestrator | 2025-06-11 14:56:23.768638 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-11 14:56:23.768643 | orchestrator | Wednesday 11 June 2025 14:53:59 +0000 (0:00:00.609) 0:08:46.961 ******** 2025-06-11 14:56:23.768649 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768654 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768659 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768664 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768670 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768675 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768680 | orchestrator | 2025-06-11 14:56:23.768685 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-11 14:56:23.768695 | orchestrator | Wednesday 11 June 2025 14:54:00 +0000 (0:00:00.757) 0:08:47.719 ******** 2025-06-11 14:56:23.768700 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.768706 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768711 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768716 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:56:23.768722 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:56:23.768727 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:56:23.768732 | orchestrator | 2025-06-11 14:56:23.768738 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-11 14:56:23.768743 | orchestrator | Wednesday 11 June 2025 14:54:01 +0000 (0:00:00.599) 0:08:48.319 ******** 2025-06-11 14:56:23.768748 | orchestrator | skipping: [testbed-node-3] 
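(Annotation, not console output.) The FAILED - RETRYING lines earlier in this run, five for "Wait for all mgr to be up" and one for "Wait for all osd to be up", come from Ansible's until/retries polling, delegated to the first monitor. A sketch of that polling pattern; the exact command, condition, and group name are assumptions, not ceph-ansible's literal code:

  - name: Wait for all mgr to be up
    ansible.builtin.command: ceph --cluster ceph status --format json
    register: ceph_status
    delegate_to: "{{ groups['mons'][0] }}"  # illustrative group name
    run_once: true
    retries: 30  # matches the 30 retries visible in the log
    delay: 5
    until: (ceph_status.stdout | from_json).mgrmap.available | default(false)
    changed_when: false

Each failed poll decrements the retry counter and sleeps for the delay; the task only fails the play if the condition is still false after the last retry, which is why the mgr wait above recovered after consuming five retries in roughly thirty seconds.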
2025-06-11 14:56:23.768753 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.768759 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.768768 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768773 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768779 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768784 | orchestrator | 2025-06-11 14:56:23.768790 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-11 14:56:23.768795 | orchestrator | Wednesday 11 June 2025 14:54:02 +0000 (0:00:00.879) 0:08:49.198 ******** 2025-06-11 14:56:23.768800 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768806 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768811 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768817 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768822 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768827 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768832 | orchestrator | 2025-06-11 14:56:23.768837 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-11 14:56:23.768843 | orchestrator | Wednesday 11 June 2025 14:54:02 +0000 (0:00:00.665) 0:08:49.864 ******** 2025-06-11 14:56:23.768848 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.768854 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.768859 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.768864 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768869 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.768875 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.768880 | orchestrator | 2025-06-11 14:56:23.768885 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-11 14:56:23.768891 | orchestrator | Wednesday 11 June 2025 14:54:03 +0000 (0:00:01.183) 0:08:51.047 ******** 2025-06-11 14:56:23.768896 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.768902 | orchestrator | 2025-06-11 14:56:23.768907 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-11 14:56:23.768912 | orchestrator | Wednesday 11 June 2025 14:54:07 +0000 (0:00:04.003) 0:08:55.050 ******** 2025-06-11 14:56:23.768918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.768923 | orchestrator | 2025-06-11 14:56:23.768928 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-11 14:56:23.768934 | orchestrator | Wednesday 11 June 2025 14:54:10 +0000 (0:00:02.154) 0:08:57.205 ******** 2025-06-11 14:56:23.768939 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.768944 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.768950 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.768955 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.768960 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.768966 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.768971 | orchestrator | 2025-06-11 14:56:23.768991 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-11 14:56:23.768998 | orchestrator | Wednesday 11 June 2025 14:54:11 +0000 (0:00:01.764) 0:08:58.970 ******** 2025-06-11 14:56:23.769003 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.769046 | 
orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.769053 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.769058 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.769063 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.769069 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.769074 | orchestrator | 2025-06-11 14:56:23.769079 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-11 14:56:23.769085 | orchestrator | Wednesday 11 June 2025 14:54:12 +0000 (0:00:00.985) 0:08:59.955 ******** 2025-06-11 14:56:23.769090 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.769097 | orchestrator | 2025-06-11 14:56:23.769102 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-11 14:56:23.769107 | orchestrator | Wednesday 11 June 2025 14:54:14 +0000 (0:00:01.376) 0:09:01.331 ******** 2025-06-11 14:56:23.769113 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.769118 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.769123 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.769128 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.769134 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.769139 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.769144 | orchestrator | 2025-06-11 14:56:23.769149 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-11 14:56:23.769155 | orchestrator | Wednesday 11 June 2025 14:54:16 +0000 (0:00:01.915) 0:09:03.247 ******** 2025-06-11 14:56:23.769160 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.769165 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.769171 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.769176 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.769181 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.769186 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.769191 | orchestrator | 2025-06-11 14:56:23.769197 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-11 14:56:23.769202 | orchestrator | Wednesday 11 June 2025 14:54:19 +0000 (0:00:03.373) 0:09:06.620 ******** 2025-06-11 14:56:23.769208 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:56:23.769213 | orchestrator | 2025-06-11 14:56:23.769218 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-11 14:56:23.769224 | orchestrator | Wednesday 11 June 2025 14:54:20 +0000 (0:00:01.114) 0:09:07.735 ******** 2025-06-11 14:56:23.769229 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769234 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769240 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769245 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.769250 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.769256 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.769261 | orchestrator | 2025-06-11 14:56:23.769266 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 
2025-06-11 14:56:23.769288 | orchestrator | Wednesday 11 June 2025 14:54:21 +0000 (0:00:00.678) 0:09:08.413 ******** 2025-06-11 14:56:23.769294 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.769300 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.769305 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.769316 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:56:23.769321 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:56:23.769327 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:56:23.769332 | orchestrator | 2025-06-11 14:56:23.769340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-11 14:56:23.769346 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:02.068) 0:09:10.482 ******** 2025-06-11 14:56:23.769351 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769356 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769366 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769372 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:56:23.769377 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:56:23.769382 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:56:23.769387 | orchestrator | 2025-06-11 14:56:23.769393 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-11 14:56:23.769398 | orchestrator | 2025-06-11 14:56:23.769403 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-11 14:56:23.769409 | orchestrator | Wednesday 11 June 2025 14:54:24 +0000 (0:00:00.913) 0:09:11.395 ******** 2025-06-11 14:56:23.769414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.769419 | orchestrator | 2025-06-11 14:56:23.769425 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-11 14:56:23.769430 | orchestrator | Wednesday 11 June 2025 14:54:24 +0000 (0:00:00.429) 0:09:11.825 ******** 2025-06-11 14:56:23.769436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.769441 | orchestrator | 2025-06-11 14:56:23.769446 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-11 14:56:23.769452 | orchestrator | Wednesday 11 June 2025 14:54:25 +0000 (0:00:00.598) 0:09:12.423 ******** 2025-06-11 14:56:23.769457 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769463 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769468 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769474 | orchestrator | 2025-06-11 14:56:23.769479 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-11 14:56:23.769484 | orchestrator | Wednesday 11 June 2025 14:54:25 +0000 (0:00:00.307) 0:09:12.730 ******** 2025-06-11 14:56:23.769489 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769495 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769500 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769505 | orchestrator | 2025-06-11 14:56:23.769511 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-11 14:56:23.769516 | orchestrator | Wednesday 11 June 2025 14:54:26 +0000 (0:00:00.691) 0:09:13.422 ******** 
2025-06-11 14:56:23.769521 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769527 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769532 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769537 | orchestrator | 2025-06-11 14:56:23.769543 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-11 14:56:23.769548 | orchestrator | Wednesday 11 June 2025 14:54:27 +0000 (0:00:00.865) 0:09:14.288 ******** 2025-06-11 14:56:23.769553 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769558 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769564 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769569 | orchestrator | 2025-06-11 14:56:23.769574 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-11 14:56:23.769579 | orchestrator | Wednesday 11 June 2025 14:54:27 +0000 (0:00:00.792) 0:09:15.081 ******** 2025-06-11 14:56:23.769585 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769590 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769595 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769600 | orchestrator | 2025-06-11 14:56:23.769606 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-11 14:56:23.769611 | orchestrator | Wednesday 11 June 2025 14:54:28 +0000 (0:00:00.347) 0:09:15.428 ******** 2025-06-11 14:56:23.769616 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769621 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769626 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769632 | orchestrator | 2025-06-11 14:56:23.769637 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-11 14:56:23.769643 | orchestrator | Wednesday 11 June 2025 14:54:28 +0000 (0:00:00.299) 0:09:15.728 ******** 2025-06-11 14:56:23.769652 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769657 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769663 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769668 | orchestrator | 2025-06-11 14:56:23.769673 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-11 14:56:23.769678 | orchestrator | Wednesday 11 June 2025 14:54:29 +0000 (0:00:00.591) 0:09:16.319 ******** 2025-06-11 14:56:23.769684 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769689 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769694 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769699 | orchestrator | 2025-06-11 14:56:23.769705 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-11 14:56:23.769710 | orchestrator | Wednesday 11 June 2025 14:54:29 +0000 (0:00:00.707) 0:09:17.026 ******** 2025-06-11 14:56:23.769715 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769720 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769726 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769731 | orchestrator | 2025-06-11 14:56:23.769736 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-11 14:56:23.769741 | orchestrator | Wednesday 11 June 2025 14:54:30 +0000 (0:00:00.731) 0:09:17.758 ******** 2025-06-11 14:56:23.769747 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769752 | orchestrator | skipping: 
[testbed-node-4] 2025-06-11 14:56:23.769757 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769762 | orchestrator | 2025-06-11 14:56:23.769768 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-11 14:56:23.769773 | orchestrator | Wednesday 11 June 2025 14:54:30 +0000 (0:00:00.303) 0:09:18.062 ******** 2025-06-11 14:56:23.769778 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769783 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769789 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769794 | orchestrator | 2025-06-11 14:56:23.769802 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-11 14:56:23.769811 | orchestrator | Wednesday 11 June 2025 14:54:31 +0000 (0:00:00.647) 0:09:18.709 ******** 2025-06-11 14:56:23.769817 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769822 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769828 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769833 | orchestrator | 2025-06-11 14:56:23.769838 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-11 14:56:23.769844 | orchestrator | Wednesday 11 June 2025 14:54:31 +0000 (0:00:00.417) 0:09:19.127 ******** 2025-06-11 14:56:23.769849 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769854 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769860 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769865 | orchestrator | 2025-06-11 14:56:23.769870 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-11 14:56:23.769876 | orchestrator | Wednesday 11 June 2025 14:54:32 +0000 (0:00:00.389) 0:09:19.516 ******** 2025-06-11 14:56:23.769881 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.769886 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.769891 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.769897 | orchestrator | 2025-06-11 14:56:23.769902 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-11 14:56:23.769907 | orchestrator | Wednesday 11 June 2025 14:54:32 +0000 (0:00:00.379) 0:09:19.895 ******** 2025-06-11 14:56:23.769913 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769918 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769923 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769929 | orchestrator | 2025-06-11 14:56:23.769934 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-11 14:56:23.769940 | orchestrator | Wednesday 11 June 2025 14:54:33 +0000 (0:00:00.596) 0:09:20.491 ******** 2025-06-11 14:56:23.769945 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769955 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769960 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769965 | orchestrator | 2025-06-11 14:56:23.769971 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-11 14:56:23.769976 | orchestrator | Wednesday 11 June 2025 14:54:33 +0000 (0:00:00.302) 0:09:20.794 ******** 2025-06-11 14:56:23.769981 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.769986 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.769992 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.769997 
| orchestrator | 2025-06-11 14:56:23.770002 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-11 14:56:23.770008 | orchestrator | Wednesday 11 June 2025 14:54:33 +0000 (0:00:00.321) 0:09:21.116 ******** 2025-06-11 14:56:23.770043 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.770051 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.770056 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.770062 | orchestrator | 2025-06-11 14:56:23.770067 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-11 14:56:23.770073 | orchestrator | Wednesday 11 June 2025 14:54:34 +0000 (0:00:00.331) 0:09:21.447 ******** 2025-06-11 14:56:23.770078 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.770083 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.770089 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.770094 | orchestrator | 2025-06-11 14:56:23.770099 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-11 14:56:23.770105 | orchestrator | Wednesday 11 June 2025 14:54:35 +0000 (0:00:00.772) 0:09:22.220 ******** 2025-06-11 14:56:23.770110 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.770115 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.770120 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-11 14:56:23.770126 | orchestrator | 2025-06-11 14:56:23.770131 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-11 14:56:23.770136 | orchestrator | Wednesday 11 June 2025 14:54:35 +0000 (0:00:00.461) 0:09:22.682 ******** 2025-06-11 14:56:23.770142 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.770147 | orchestrator | 2025-06-11 14:56:23.770152 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-11 14:56:23.770158 | orchestrator | Wednesday 11 June 2025 14:54:37 +0000 (0:00:02.079) 0:09:24.761 ******** 2025-06-11 14:56:23.770164 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-11 14:56:23.770172 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.770177 | orchestrator | 2025-06-11 14:56:23.770182 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-11 14:56:23.770188 | orchestrator | Wednesday 11 June 2025 14:54:37 +0000 (0:00:00.208) 0:09:24.969 ******** 2025-06-11 14:56:23.770194 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-11 14:56:23.770205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-11 14:56:23.770210 | orchestrator | 2025-06-11 14:56:23.770216 | orchestrator | TASK [ceph-mds : Create ceph filesystem] 
*************************************** 2025-06-11 14:56:23.770221 | orchestrator | Wednesday 11 June 2025 14:54:46 +0000 (0:00:08.332) 0:09:33.301 ******** 2025-06-11 14:56:23.770226 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 14:56:23.770236 | orchestrator | 2025-06-11 14:56:23.770245 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-11 14:56:23.770254 | orchestrator | Wednesday 11 June 2025 14:54:49 +0000 (0:00:03.620) 0:09:36.922 ******** 2025-06-11 14:56:23.770260 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770265 | orchestrator | 2025-06-11 14:56:23.770286 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-11 14:56:23.770292 | orchestrator | Wednesday 11 June 2025 14:54:50 +0000 (0:00:00.786) 0:09:37.709 ******** 2025-06-11 14:56:23.770298 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-11 14:56:23.770303 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-11 14:56:23.770308 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-11 14:56:23.770313 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-11 14:56:23.770319 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-11 14:56:23.770324 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-11 14:56:23.770329 | orchestrator | 2025-06-11 14:56:23.770335 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-11 14:56:23.770340 | orchestrator | Wednesday 11 June 2025 14:54:51 +0000 (0:00:01.110) 0:09:38.820 ******** 2025-06-11 14:56:23.770345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.770351 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.770356 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.770361 | orchestrator | 2025-06-11 14:56:23.770366 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-11 14:56:23.770372 | orchestrator | Wednesday 11 June 2025 14:54:54 +0000 (0:00:02.639) 0:09:41.459 ******** 2025-06-11 14:56:23.770377 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-11 14:56:23.770383 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.770389 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770394 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-11 14:56:23.770399 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-11 14:56:23.770404 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770410 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-11 14:56:23.770415 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-11 14:56:23.770420 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770426 | orchestrator | 2025-06-11 14:56:23.770431 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-11 14:56:23.770436 | orchestrator | Wednesday 11 June 2025 14:54:55 +0000 (0:00:01.461) 0:09:42.920 ******** 2025-06-11 
14:56:23.770442 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770447 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770452 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770458 | orchestrator | 2025-06-11 14:56:23.770463 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-11 14:56:23.770468 | orchestrator | Wednesday 11 June 2025 14:54:58 +0000 (0:00:02.542) 0:09:45.463 ******** 2025-06-11 14:56:23.770474 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.770479 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.770484 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.770489 | orchestrator | 2025-06-11 14:56:23.770495 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-11 14:56:23.770500 | orchestrator | Wednesday 11 June 2025 14:54:58 +0000 (0:00:00.321) 0:09:45.784 ******** 2025-06-11 14:56:23.770505 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770511 | orchestrator | 2025-06-11 14:56:23.770520 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-11 14:56:23.770526 | orchestrator | Wednesday 11 June 2025 14:54:59 +0000 (0:00:00.747) 0:09:46.532 ******** 2025-06-11 14:56:23.770531 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770536 | orchestrator | 2025-06-11 14:56:23.770541 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-11 14:56:23.770547 | orchestrator | Wednesday 11 June 2025 14:54:59 +0000 (0:00:00.602) 0:09:47.134 ******** 2025-06-11 14:56:23.770552 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770557 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770562 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770568 | orchestrator | 2025-06-11 14:56:23.770573 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-11 14:56:23.770578 | orchestrator | Wednesday 11 June 2025 14:55:01 +0000 (0:00:01.260) 0:09:48.395 ******** 2025-06-11 14:56:23.770583 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770589 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770594 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770599 | orchestrator | 2025-06-11 14:56:23.770605 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-11 14:56:23.770610 | orchestrator | Wednesday 11 June 2025 14:55:02 +0000 (0:00:01.391) 0:09:49.787 ******** 2025-06-11 14:56:23.770615 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770621 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770626 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770631 | orchestrator | 2025-06-11 14:56:23.770637 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-11 14:56:23.770642 | orchestrator | Wednesday 11 June 2025 14:55:04 +0000 (0:00:02.093) 0:09:51.881 ******** 2025-06-11 14:56:23.770647 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770653 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770658 | orchestrator | changed: [testbed-node-5] 
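
The "Create filesystem pools" and "Create ceph filesystem" tasks earlier in this play are delegated to the first monitor (testbed-node-0) and reduce to ordinary ceph CLI calls. A minimal shell sketch of the equivalent, using the pg_num/pgp_num/size values from the logged items; the filesystem name "cephfs" is an assumption inferred from the pool names:

    # Run on a mon node; pool parameters taken from the logged task items
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_metadata size 3
    # Bind the metadata and data pools into a filesystem (fs name assumed)
    ceph fs new cephfs cephfs_metadata cephfs_data
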
2025-06-11 14:56:23.770663 | orchestrator | 2025-06-11 14:56:23.770672 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-11 14:56:23.770678 | orchestrator | Wednesday 11 June 2025 14:55:06 +0000 (0:00:02.055) 0:09:53.937 ******** 2025-06-11 14:56:23.770686 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.770691 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.770697 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.770702 | orchestrator | 2025-06-11 14:56:23.770707 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-11 14:56:23.770713 | orchestrator | Wednesday 11 June 2025 14:55:08 +0000 (0:00:01.535) 0:09:55.472 ******** 2025-06-11 14:56:23.770718 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770723 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770729 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770734 | orchestrator | 2025-06-11 14:56:23.770739 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-11 14:56:23.770745 | orchestrator | Wednesday 11 June 2025 14:55:08 +0000 (0:00:00.668) 0:09:56.141 ******** 2025-06-11 14:56:23.770750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770755 | orchestrator | 2025-06-11 14:56:23.770761 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-11 14:56:23.770766 | orchestrator | Wednesday 11 June 2025 14:55:09 +0000 (0:00:00.731) 0:09:56.872 ******** 2025-06-11 14:56:23.770771 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.770777 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.770782 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.770787 | orchestrator | 2025-06-11 14:56:23.770793 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-11 14:56:23.770798 | orchestrator | Wednesday 11 June 2025 14:55:10 +0000 (0:00:00.321) 0:09:57.193 ******** 2025-06-11 14:56:23.770809 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.770815 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.770820 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.770825 | orchestrator | 2025-06-11 14:56:23.770830 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-11 14:56:23.770836 | orchestrator | Wednesday 11 June 2025 14:55:11 +0000 (0:00:01.223) 0:09:58.417 ******** 2025-06-11 14:56:23.770841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-11 14:56:23.770846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-11 14:56:23.770852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-11 14:56:23.770857 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.770862 | orchestrator | 2025-06-11 14:56:23.770868 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-11 14:56:23.770873 | orchestrator | Wednesday 11 June 2025 14:55:12 +0000 (0:00:00.819) 0:09:59.236 ******** 2025-06-11 14:56:23.770878 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.770883 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.770889 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.770894 | 
orchestrator | 2025-06-11 14:56:23.770899 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-11 14:56:23.770905 | orchestrator | 2025-06-11 14:56:23.770910 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-11 14:56:23.770915 | orchestrator | Wednesday 11 June 2025 14:55:12 +0000 (0:00:00.819) 0:10:00.056 ******** 2025-06-11 14:56:23.770921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770926 | orchestrator | 2025-06-11 14:56:23.770931 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-11 14:56:23.770937 | orchestrator | Wednesday 11 June 2025 14:55:13 +0000 (0:00:00.531) 0:10:00.587 ******** 2025-06-11 14:56:23.770942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.770947 | orchestrator | 2025-06-11 14:56:23.770952 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-11 14:56:23.770958 | orchestrator | Wednesday 11 June 2025 14:55:14 +0000 (0:00:00.714) 0:10:01.302 ******** 2025-06-11 14:56:23.770963 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.770968 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.770973 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.770979 | orchestrator | 2025-06-11 14:56:23.770984 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-11 14:56:23.770989 | orchestrator | Wednesday 11 June 2025 14:55:14 +0000 (0:00:00.299) 0:10:01.601 ******** 2025-06-11 14:56:23.770995 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771000 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771005 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771010 | orchestrator | 2025-06-11 14:56:23.771016 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-11 14:56:23.771021 | orchestrator | Wednesday 11 June 2025 14:55:15 +0000 (0:00:00.717) 0:10:02.319 ******** 2025-06-11 14:56:23.771026 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771031 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771037 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771042 | orchestrator | 2025-06-11 14:56:23.771047 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-11 14:56:23.771053 | orchestrator | Wednesday 11 June 2025 14:55:15 +0000 (0:00:00.690) 0:10:03.009 ******** 2025-06-11 14:56:23.771058 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771063 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771068 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771074 | orchestrator | 2025-06-11 14:56:23.771079 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-11 14:56:23.771088 | orchestrator | Wednesday 11 June 2025 14:55:16 +0000 (0:00:01.054) 0:10:04.064 ******** 2025-06-11 14:56:23.771094 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771099 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771104 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771109 | orchestrator | 2025-06-11 
14:56:23.771115 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-11 14:56:23.771123 | orchestrator | Wednesday 11 June 2025 14:55:17 +0000 (0:00:00.322) 0:10:04.386 ******** 2025-06-11 14:56:23.771129 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771134 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771142 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771148 | orchestrator | 2025-06-11 14:56:23.771153 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-11 14:56:23.771158 | orchestrator | Wednesday 11 June 2025 14:55:17 +0000 (0:00:00.308) 0:10:04.695 ******** 2025-06-11 14:56:23.771164 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771169 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771174 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771179 | orchestrator | 2025-06-11 14:56:23.771185 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-11 14:56:23.771190 | orchestrator | Wednesday 11 June 2025 14:55:17 +0000 (0:00:00.313) 0:10:05.008 ******** 2025-06-11 14:56:23.771195 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771201 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771206 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771211 | orchestrator | 2025-06-11 14:56:23.771217 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-11 14:56:23.771222 | orchestrator | Wednesday 11 June 2025 14:55:18 +0000 (0:00:01.075) 0:10:06.084 ******** 2025-06-11 14:56:23.771227 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771232 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771238 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771243 | orchestrator | 2025-06-11 14:56:23.771248 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-11 14:56:23.771254 | orchestrator | Wednesday 11 June 2025 14:55:19 +0000 (0:00:00.748) 0:10:06.833 ******** 2025-06-11 14:56:23.771259 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771264 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771270 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771311 | orchestrator | 2025-06-11 14:56:23.771317 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-11 14:56:23.771322 | orchestrator | Wednesday 11 June 2025 14:55:19 +0000 (0:00:00.307) 0:10:07.140 ******** 2025-06-11 14:56:23.771328 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771333 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771338 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771343 | orchestrator | 2025-06-11 14:56:23.771349 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-11 14:56:23.771354 | orchestrator | Wednesday 11 June 2025 14:55:20 +0000 (0:00:00.316) 0:10:07.457 ******** 2025-06-11 14:56:23.771359 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771364 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771370 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771375 | orchestrator | 2025-06-11 14:56:23.771380 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 
2025-06-11 14:56:23.771385 | orchestrator | Wednesday 11 June 2025 14:55:20 +0000 (0:00:00.617) 0:10:08.074 ******** 2025-06-11 14:56:23.771391 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771396 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771401 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771407 | orchestrator | 2025-06-11 14:56:23.771416 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-11 14:56:23.771426 | orchestrator | Wednesday 11 June 2025 14:55:21 +0000 (0:00:00.368) 0:10:08.442 ******** 2025-06-11 14:56:23.771442 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771451 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771458 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771464 | orchestrator | 2025-06-11 14:56:23.771469 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-11 14:56:23.771475 | orchestrator | Wednesday 11 June 2025 14:55:21 +0000 (0:00:00.323) 0:10:08.766 ******** 2025-06-11 14:56:23.771480 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771486 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771491 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771496 | orchestrator | 2025-06-11 14:56:23.771501 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-11 14:56:23.771507 | orchestrator | Wednesday 11 June 2025 14:55:21 +0000 (0:00:00.321) 0:10:09.087 ******** 2025-06-11 14:56:23.771512 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771517 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771523 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771528 | orchestrator | 2025-06-11 14:56:23.771533 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-11 14:56:23.771538 | orchestrator | Wednesday 11 June 2025 14:55:22 +0000 (0:00:00.571) 0:10:09.659 ******** 2025-06-11 14:56:23.771544 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771549 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771554 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771559 | orchestrator | 2025-06-11 14:56:23.771565 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-11 14:56:23.771570 | orchestrator | Wednesday 11 June 2025 14:55:22 +0000 (0:00:00.309) 0:10:09.969 ******** 2025-06-11 14:56:23.771575 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771581 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771586 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771591 | orchestrator | 2025-06-11 14:56:23.771596 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-11 14:56:23.771602 | orchestrator | Wednesday 11 June 2025 14:55:23 +0000 (0:00:00.318) 0:10:10.287 ******** 2025-06-11 14:56:23.771607 | orchestrator | ok: [testbed-node-3] 2025-06-11 14:56:23.771612 | orchestrator | ok: [testbed-node-4] 2025-06-11 14:56:23.771618 | orchestrator | ok: [testbed-node-5] 2025-06-11 14:56:23.771623 | orchestrator | 2025-06-11 14:56:23.771628 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-11 14:56:23.771634 | orchestrator | Wednesday 11 June 2025 14:55:23 +0000 (0:00:00.770) 0:10:11.057 ******** 2025-06-11 
14:56:23.771639 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.771644 | orchestrator | 2025-06-11 14:56:23.771650 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-11 14:56:23.771655 | orchestrator | Wednesday 11 June 2025 14:55:24 +0000 (0:00:00.527) 0:10:11.585 ******** 2025-06-11 14:56:23.771665 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771670 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.771679 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.771685 | orchestrator | 2025-06-11 14:56:23.771690 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-11 14:56:23.771695 | orchestrator | Wednesday 11 June 2025 14:55:26 +0000 (0:00:02.207) 0:10:13.792 ******** 2025-06-11 14:56:23.771701 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-11 14:56:23.771706 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-11 14:56:23.771711 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-11 14:56:23.771717 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.771722 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-11 14:56:23.771727 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.771733 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-11 14:56:23.771742 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-11 14:56:23.771748 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.771753 | orchestrator | 2025-06-11 14:56:23.771758 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-11 14:56:23.771763 | orchestrator | Wednesday 11 June 2025 14:55:27 +0000 (0:00:01.347) 0:10:15.140 ******** 2025-06-11 14:56:23.771767 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:56:23.771772 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:56:23.771776 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:56:23.771781 | orchestrator | 2025-06-11 14:56:23.771786 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-11 14:56:23.771790 | orchestrator | Wednesday 11 June 2025 14:55:28 +0000 (0:00:00.288) 0:10:15.428 ******** 2025-06-11 14:56:23.771795 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 14:56:23.771800 | orchestrator | 2025-06-11 14:56:23.771805 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-11 14:56:23.771809 | orchestrator | Wednesday 11 June 2025 14:55:28 +0000 (0:00:00.394) 0:10:15.823 ******** 2025-06-11 14:56:23.771814 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.771819 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.771824 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2025-06-11 14:56:23.771829 | orchestrator | 2025-06-11 14:56:23.771833 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-11 14:56:23.771838 | orchestrator | Wednesday 11 June 2025 14:55:29 +0000 (0:00:00.975) 0:10:16.799 ******** 2025-06-11 14:56:23.771843 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771847 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-11 14:56:23.771852 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771857 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-11 14:56:23.771861 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771866 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-11 14:56:23.771871 | orchestrator | 2025-06-11 14:56:23.771876 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-11 14:56:23.771880 | orchestrator | Wednesday 11 June 2025 14:55:33 +0000 (0:00:04.175) 0:10:20.974 ******** 2025-06-11 14:56:23.771885 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771890 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.771895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771899 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.771904 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:56:23.771909 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:56:23.771913 | orchestrator | 2025-06-11 14:56:23.771918 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-11 14:56:23.771923 | orchestrator | Wednesday 11 June 2025 14:55:36 +0000 (0:00:02.277) 0:10:23.252 ******** 2025-06-11 14:56:23.771931 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-11 14:56:23.771936 | orchestrator | changed: [testbed-node-3] 2025-06-11 14:56:23.771940 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-11 14:56:23.771945 | orchestrator | changed: [testbed-node-4] 2025-06-11 14:56:23.771950 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-11 14:56:23.771954 | orchestrator | changed: [testbed-node-5] 2025-06-11 14:56:23.771959 | orchestrator | 2025-06-11 14:56:23.771964 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-11 14:56:23.771969 | orchestrator | Wednesday 11 June 2025 14:55:37 +0000 (0:00:01.216) 0:10:24.469 ******** 2025-06-11 14:56:23.771976 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-11 14:56:23.771981 | orchestrator | 2025-06-11 14:56:23.771985 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-11 14:56:23.771993 | orchestrator | Wednesday 11 June 2025 14:55:37 +0000 (0:00:00.208) 0:10:24.677 
********
2025-06-11 14:56:23.771998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772022 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772026 | orchestrator |
2025-06-11 14:56:23.772031 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-06-11 14:56:23.772036 | orchestrator | Wednesday 11 June 2025 14:55:38 +0000 (0:00:01.094) 0:10:25.772 ********
2025-06-11 14:56:23.772040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772064 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772069 | orchestrator |
2025-06-11 14:56:23.772074 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-06-11 14:56:23.772078 | orchestrator | Wednesday 11 June 2025 14:55:39 +0000 (0:00:00.646) 0:10:26.418 ********
2025-06-11 14:56:23.772083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772088 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772093 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772097 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772105 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-11 14:56:23.772110 | orchestrator |
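
The "Create rgw pools" task above, at 30.18s one of the slowest steps in the recap below, loops over the five default.rgw.* pools with the parameters shown in the items. A rough shell equivalent, an illustrative sketch rather than the exact module invocation:

    # Run on a mon node; pg_num=8 and size=3 per the logged items
    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        ceph osd pool create "$pool" 8 8 replicated
        ceph osd pool set "$pool" size 3
        ceph osd pool application enable "$pool" rgw
    done
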
2025-06-11 14:56:23.772115 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-06-11 14:56:23.772120 | orchestrator | Wednesday 11 June 2025 14:56:09 +0000 (0:00:30.177) 0:10:56.596 ********
2025-06-11 14:56:23.772124 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772129 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.772134 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.772138 | orchestrator |
2025-06-11 14:56:23.772143 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-06-11 14:56:23.772148 | orchestrator | Wednesday 11 June 2025 14:56:09 +0000 (0:00:00.348) 0:10:56.944 ********
2025-06-11 14:56:23.772153 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772157 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.772162 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.772166 | orchestrator |
2025-06-11 14:56:23.772171 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-06-11 14:56:23.772176 | orchestrator | Wednesday 11 June 2025 14:56:10 +0000 (0:00:00.327) 0:10:57.272 ********
2025-06-11 14:56:23.772181 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.772185 | orchestrator |
2025-06-11 14:56:23.772190 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-06-11 14:56:23.772195 | orchestrator | Wednesday 11 June 2025 14:56:10 +0000 (0:00:00.733) 0:10:58.006 ********
2025-06-11 14:56:23.772199 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.772204 | orchestrator |
2025-06-11 14:56:23.772209 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-06-11 14:56:23.772213 | orchestrator | Wednesday 11 June 2025 14:56:11 +0000 (0:00:00.541) 0:10:58.547 ********
2025-06-11 14:56:23.772221 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.772226 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.772231 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.772235 | orchestrator |
2025-06-11 14:56:23.772243 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-06-11 14:56:23.772247 | orchestrator | Wednesday 11 June 2025 14:56:12 +0000 (0:00:01.283) 0:10:59.831 ********
2025-06-11 14:56:23.772252 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.772257 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.772261 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.772266 | orchestrator |
2025-06-11 14:56:23.772283 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-06-11 14:56:23.772289 | orchestrator | Wednesday 11 June 2025 14:56:14 +0000 (0:00:01.413) 0:11:01.244 ********
2025-06-11 14:56:23.772295 | orchestrator | changed: [testbed-node-3]
2025-06-11 14:56:23.772299 | orchestrator | changed: [testbed-node-4]
2025-06-11 14:56:23.772304 | orchestrator | changed: [testbed-node-5]
2025-06-11 14:56:23.772309 | orchestrator |
2025-06-11 14:56:23.772314 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-06-11 14:56:23.772318 | orchestrator | Wednesday 11 June 2025 14:56:16 +0000 (0:00:01.935) 0:11:03.180 ********
2025-06-11 14:56:23.772323 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-11 14:56:23.772328 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-11 14:56:23.772333 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-11 14:56:23.772337 | orchestrator |
2025-06-11 14:56:23.772346 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-11 14:56:23.772351 | orchestrator | Wednesday 11 June 2025 14:56:18 +0000 (0:00:02.598) 0:11:05.778 ********
2025-06-11 14:56:23.772355 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772360 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.772365 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.772372 | orchestrator |
2025-06-11 14:56:23.772380 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-11 14:56:23.772388 | orchestrator | Wednesday 11 June 2025 14:56:18 +0000 (0:00:00.352) 0:11:06.131 ********
2025-06-11 14:56:23.772397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:56:23.772402 | orchestrator |
2025-06-11 14:56:23.772407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-11 14:56:23.772411 | orchestrator | Wednesday 11 June 2025 14:56:19 +0000 (0:00:00.520) 0:11:06.651 ********
2025-06-11 14:56:23.772416 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.772421 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.772426 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:56:23.772430 | orchestrator |
2025-06-11 14:56:23.772435 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-11 14:56:23.772440 | orchestrator | Wednesday 11 June 2025 14:56:20 +0000 (0:00:00.584) 0:11:07.236 ********
2025-06-11 14:56:23.772444 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772449 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:56:23.772454 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:56:23.772458 | orchestrator |
2025-06-11 14:56:23.772463 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-11 14:56:23.772468 | orchestrator | Wednesday 11 June 2025 14:56:20 +0000 (0:00:00.330) 0:11:07.566 ********
2025-06-11 14:56:23.772473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:56:23.772477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:56:23.772482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:56:23.772487 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:56:23.772491 | orchestrator |
2025-06-11 14:56:23.772496 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-11 14:56:23.772501 | orchestrator | Wednesday 11 June 2025 14:56:21 +0000 (0:00:00.616) 0:11:08.182 ********
2025-06-11 14:56:23.772506 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:56:23.772510 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:56:23.772515 | orchestrator | ok: [testbed-node-5]
[testbed-node-5] 2025-06-11 14:56:23.772520 | orchestrator | 2025-06-11 14:56:23.772524 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:56:23.772529 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-11 14:56:23.772534 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-11 14:56:23.772539 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-11 14:56:23.772544 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-11 14:56:23.772548 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-11 14:56:23.772553 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-11 14:56:23.772558 | orchestrator | 2025-06-11 14:56:23.772563 | orchestrator | 2025-06-11 14:56:23.772572 | orchestrator | 2025-06-11 14:56:23.772580 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:56:23.772585 | orchestrator | Wednesday 11 June 2025 14:56:21 +0000 (0:00:00.277) 0:11:08.460 ******** 2025-06-11 14:56:23.772592 | orchestrator | =============================================================================== 2025-06-11 14:56:23.772597 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 73.74s 2025-06-11 14:56:23.772602 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.29s 2025-06-11 14:56:23.772607 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.24s 2025-06-11 14:56:23.772611 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.18s 2025-06-11 14:56:23.772616 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.84s 2025-06-11 14:56:23.772621 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.20s 2025-06-11 14:56:23.772625 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.02s 2025-06-11 14:56:23.772630 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.72s 2025-06-11 14:56:23.772635 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.92s 2025-06-11 14:56:23.772639 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.33s 2025-06-11 14:56:23.772644 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.73s 2025-06-11 14:56:23.772649 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.42s 2025-06-11 14:56:23.772653 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s 2025-06-11 14:56:23.772658 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.18s 2025-06-11 14:56:23.772663 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.00s 2025-06-11 14:56:23.772667 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.98s 2025-06-11 14:56:23.772672 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.62s 2025-06-11 14:56:23.772677 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.49s 2025-06-11 14:56:23.772681 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.38s 2025-06-11 14:56:23.772686 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.37s 2025-06-11 14:56:23.772691 | orchestrator | 2025-06-11 14:56:23 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:23.772695 | orchestrator | 2025-06-11 14:56:23 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:23.772700 | orchestrator | 2025-06-11 14:56:23 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:23.772705 | orchestrator | 2025-06-11 14:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:26.794403 | orchestrator | 2025-06-11 14:56:26 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:26.794735 | orchestrator | 2025-06-11 14:56:26 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:26.795689 | orchestrator | 2025-06-11 14:56:26 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:26.795953 | orchestrator | 2025-06-11 14:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:29.836073 | orchestrator | 2025-06-11 14:56:29 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:29.836875 | orchestrator | 2025-06-11 14:56:29 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:29.838426 | orchestrator | 2025-06-11 14:56:29 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:29.838542 | orchestrator | 2025-06-11 14:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:32.890906 | orchestrator | 2025-06-11 14:56:32 | INFO  | Task 
8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:32.892989 | orchestrator | 2025-06-11 14:56:32 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:32.894830 | orchestrator | 2025-06-11 14:56:32 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:32.894871 | orchestrator | 2025-06-11 14:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:35.935477 | orchestrator | 2025-06-11 14:56:35 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:35.936561 | orchestrator | 2025-06-11 14:56:35 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:35.938587 | orchestrator | 2025-06-11 14:56:35 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:35.938635 | orchestrator | 2025-06-11 14:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:38.982868 | orchestrator | 2025-06-11 14:56:38 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:38.982983 | orchestrator | 2025-06-11 14:56:38 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:38.982992 | orchestrator | 2025-06-11 14:56:38 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:38.983310 | orchestrator | 2025-06-11 14:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:42.033610 | orchestrator | 2025-06-11 14:56:42 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:42.034235 | orchestrator | 2025-06-11 14:56:42 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:42.035175 | orchestrator | 2025-06-11 14:56:42 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:42.035207 | orchestrator | 2025-06-11 14:56:42 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:45.084907 | orchestrator | 2025-06-11 14:56:45 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:45.085015 | orchestrator | 2025-06-11 14:56:45 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:45.089375 | orchestrator | 2025-06-11 14:56:45 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:45.089407 | orchestrator | 2025-06-11 14:56:45 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:48.137686 | orchestrator | 2025-06-11 14:56:48 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:48.139050 | orchestrator | 2025-06-11 14:56:48 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:48.141139 | orchestrator | 2025-06-11 14:56:48 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:48.142061 | orchestrator | 2025-06-11 14:56:48 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:51.185504 | orchestrator | 2025-06-11 14:56:51 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:51.186219 | orchestrator | 2025-06-11 14:56:51 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:51.187745 | orchestrator | 2025-06-11 14:56:51 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:51.187793 | orchestrator | 2025-06-11 14:56:51 | INFO  | Wait 1 second(s) until the next 
check 2025-06-11 14:56:54.236485 | orchestrator | 2025-06-11 14:56:54 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:54.237890 | orchestrator | 2025-06-11 14:56:54 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:54.239940 | orchestrator | 2025-06-11 14:56:54 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:54.240393 | orchestrator | 2025-06-11 14:56:54 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:56:57.285493 | orchestrator | 2025-06-11 14:56:57 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:56:57.287848 | orchestrator | 2025-06-11 14:56:57 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:56:57.289039 | orchestrator | 2025-06-11 14:56:57 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:56:57.289180 | orchestrator | 2025-06-11 14:56:57 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:57:00.331923 | orchestrator | 2025-06-11 14:57:00 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:57:00.336046 | orchestrator | 2025-06-11 14:57:00 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:57:00.339135 | orchestrator | 2025-06-11 14:57:00 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:57:00.339185 | orchestrator | 2025-06-11 14:57:00 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:57:03.378122 | orchestrator | 2025-06-11 14:57:03 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:57:03.378513 | orchestrator | 2025-06-11 14:57:03 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:57:03.379535 | orchestrator | 2025-06-11 14:57:03 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:57:03.379567 | orchestrator | 2025-06-11 14:57:03 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:57:06.423173 | orchestrator | 2025-06-11 14:57:06 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:57:06.425080 | orchestrator | 2025-06-11 14:57:06 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:57:06.426556 | orchestrator | 2025-06-11 14:57:06 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:57:06.426596 | orchestrator | 2025-06-11 14:57:06 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:57:09.470241 | orchestrator | 2025-06-11 14:57:09 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:57:09.470938 | orchestrator | 2025-06-11 14:57:09 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:57:09.472679 | orchestrator | 2025-06-11 14:57:09 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state STARTED 2025-06-11 14:57:09.472707 | orchestrator | 2025-06-11 14:57:09 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:57:12.517470 | orchestrator | 2025-06-11 14:57:12 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED 2025-06-11 14:57:12.518278 | orchestrator | 2025-06-11 14:57:12 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED 2025-06-11 14:57:12.520881 | orchestrator | 2025-06-11 14:57:12 | INFO  | Task 07d4611f-0a05-4876-aef0-40133e1fd87d is in state SUCCESS 2025-06-11 
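The ceph-rgw tasks above follow the usual containerized ceph-ansible flow: template a systemd unit plus a ceph-radosgw.target, enable the target, then start one ceph-radosgw@ service per RGW instance item. The INFO lines that follow are the OSISM task watcher polling the three Celery task IDs until each leaves the STARTED state. A minimal Ansible sketch of the systemd flow for reference (the module names are real; the template name, unit naming scheme, and the rgw_instances variable are illustrative assumptions, not the actual role code):

    # Sketch only: mirrors the task names in the log, assuming a templated
    # ceph-radosgw@.service unit and an rgw_instances list shaped like the
    # items shown above.
    - name: Generate systemd unit file
      ansible.builtin.template:
        src: ceph-radosgw.service.j2          # assumed template name
        dest: /etc/systemd/system/ceph-radosgw@.service
        mode: "0644"

    - name: Enable ceph-radosgw.target
      ansible.builtin.systemd:
        name: ceph-radosgw.target
        enabled: true
        daemon_reload: true

    - name: Systemd start rgw container
      ansible.builtin.systemd:
        # one service instance per RGW instance item
        name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: started
        enabled: true
      loop: "{{ rgw_instances }}"
      # item shape as logged:
      # {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}

Each unit wraps a container run of the radosgw daemon bound to the item's radosgw_address and radosgw_frontend_port (8081 on each of the three nodes here).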
2025-06-11 14:57:12.521282 | orchestrator |
2025-06-11 14:57:12.522996 | orchestrator |
2025-06-11 14:57:12.523031 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 14:57:12.523055 | orchestrator |
2025-06-11 14:57:12.523061 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 14:57:12.523067 | orchestrator | Wednesday 11 June 2025 14:54:20 +0000 (0:00:00.201) 0:00:00.201 ********
2025-06-11 14:57:12.523073 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:12.523080 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:12.523085 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:12.523090 | orchestrator |
2025-06-11 14:57:12.523096 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 14:57:12.523101 | orchestrator | Wednesday 11 June 2025 14:54:20 +0000 (0:00:00.223) 0:00:00.425 ********
2025-06-11 14:57:12.523107 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-06-11 14:57:12.523113 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-06-11 14:57:12.523118 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-11 14:57:12.523124 | orchestrator |
2025-06-11 14:57:12.523129 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-11 14:57:12.523134 | orchestrator |
2025-06-11 14:57:12.523140 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-11 14:57:12.523145 | orchestrator | Wednesday 11 June 2025 14:54:20 +0000 (0:00:00.298) 0:00:00.723 ********
2025-06-11 14:57:12.523151 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:57:12.523156 | orchestrator |
2025-06-11 14:57:12.523162 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-06-11 14:57:12.523167 | orchestrator | Wednesday 11 June 2025 14:54:21 +0000 (0:00:00.380) 0:00:01.103 ********
2025-06-11 14:57:12.523172 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-11 14:57:12.523178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-11 14:57:12.523184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-11 14:57:12.523189 | orchestrator |
2025-06-11 14:57:12.523194 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-06-11 14:57:12.523200 | orchestrator | Wednesday 11 June 2025 14:54:21 +0000 (0:00:00.606) 0:00:01.710 ********
2025-06-11 14:57:12.523208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523405 | orchestrator | 2025-06-11 14:57:12.523411 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-11 14:57:12.523417 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:01.533) 0:00:03.243 ******** 2025-06-11 14:57:12.523422 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:57:12.523427 | orchestrator | 2025-06-11 14:57:12.523433 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-11 14:57:12.523438 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:00.523) 0:00:03.767 ******** 2025-06-11 14:57:12.523451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523493 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523499 | orchestrator | 2025-06-11 14:57:12.523505 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-11 14:57:12.523510 | orchestrator | Wednesday 11 June 2025 14:54:26 +0000 (0:00:02.421) 0:00:06.188 ******** 2025-06-11 14:57:12.523516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523536 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:12.523542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523558 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:12.523564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523583 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:12.523588 | orchestrator | 2025-06-11 14:57:12.523594 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-11 14:57:12.523599 | orchestrator | Wednesday 11 June 2025 14:54:27 +0000 (0:00:01.384) 0:00:07.573 ******** 2025-06-11 14:57:12.523605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523639 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:12.523645 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:12.523653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-11 14:57:12.523665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-11 14:57:12.523671 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:12.523676 | orchestrator | 2025-06-11 14:57:12.523682 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-11 14:57:12.523687 | orchestrator | Wednesday 11 June 2025 14:54:28 +0000 (0:00:00.839) 0:00:08.412 ******** 2025-06-11 14:57:12.523693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 
2025-06-11 14:57:12.523733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523749 | orchestrator | 2025-06-11 14:57:12.523754 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-11 14:57:12.523770 | orchestrator | Wednesday 11 June 2025 14:54:31 +0000 (0:00:02.630) 0:00:11.043 ******** 2025-06-11 14:57:12.523776 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:12.523782 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:12.523787 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:12.523792 | orchestrator | 2025-06-11 14:57:12.523798 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-11 14:57:12.523803 | orchestrator | Wednesday 11 June 2025 14:54:34 +0000 (0:00:03.749) 0:00:14.792 ******** 2025-06-11 14:57:12.523811 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:12.523817 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:12.523822 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:12.523828 | orchestrator | 2025-06-11 14:57:12.523833 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-11 14:57:12.523838 | orchestrator | Wednesday 11 June 2025 14:54:36 +0000 (0:00:01.794) 0:00:16.587 ******** 2025-06-11 14:57:12.523844 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-11 14:57:12.523870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-11 14:57:12.523895 | orchestrator | 2025-06-11 14:57:12.523900 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-11 14:57:12.523906 | orchestrator | Wednesday 11 June 2025 14:54:38 +0000 (0:00:01.889) 0:00:18.476 ******** 2025-06-11 14:57:12.523911 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:12.523917 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:12.523922 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:12.523927 | orchestrator | 2025-06-11 14:57:12.523933 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-11 14:57:12.523942 | orchestrator | Wednesday 11 June 2025 14:54:38 +0000 (0:00:00.274) 0:00:18.751 ******** 2025-06-11 14:57:12.523948 | orchestrator | 2025-06-11 14:57:12.523953 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-11 14:57:12.523959 | orchestrator | Wednesday 11 June 2025 14:54:38 +0000 (0:00:00.067) 0:00:18.819 ******** 2025-06-11 14:57:12.523964 | orchestrator | 2025-06-11 14:57:12.523969 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-11 14:57:12.523974 | orchestrator | Wednesday 11 June 2025 14:54:38 +0000 
(0:00:00.073) 0:00:18.892 ******** 2025-06-11 14:57:12.523980 | orchestrator | 2025-06-11 14:57:12.523985 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-11 14:57:12.523992 | orchestrator | Wednesday 11 June 2025 14:54:39 +0000 (0:00:00.265) 0:00:19.158 ******** 2025-06-11 14:57:12.523998 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:12.524004 | orchestrator | 2025-06-11 14:57:12.524010 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-11 14:57:12.524016 | orchestrator | Wednesday 11 June 2025 14:54:39 +0000 (0:00:00.188) 0:00:19.347 ******** 2025-06-11 14:57:12.524022 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:12.524028 | orchestrator | 2025-06-11 14:57:12.524034 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-11 14:57:12.524040 | orchestrator | Wednesday 11 June 2025 14:54:39 +0000 (0:00:00.239) 0:00:19.586 ******** 2025-06-11 14:57:12.524047 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:12.524052 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:12.524058 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:12.524064 | orchestrator | 2025-06-11 14:57:12.524070 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-11 14:57:12.524076 | orchestrator | Wednesday 11 June 2025 14:55:45 +0000 (0:01:06.227) 0:01:25.813 ******** 2025-06-11 14:57:12.524083 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:12.524088 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:12.524095 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:12.524100 | orchestrator | 2025-06-11 14:57:12.524106 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-11 14:57:12.524112 | orchestrator | Wednesday 11 June 2025 14:56:59 +0000 (0:01:13.491) 0:02:39.305 ******** 2025-06-11 14:57:12.524118 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:57:12.524124 | orchestrator | 2025-06-11 14:57:12.524130 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-11 14:57:12.524136 | orchestrator | Wednesday 11 June 2025 14:56:59 +0000 (0:00:00.568) 0:02:39.874 ******** 2025-06-11 14:57:12.524142 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:57:12.524148 | orchestrator | 2025-06-11 14:57:12.524154 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-11 14:57:12.524163 | orchestrator | Wednesday 11 June 2025 14:57:02 +0000 (0:00:02.394) 0:02:42.268 ******** 2025-06-11 14:57:12.524169 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:57:12.524175 | orchestrator | 2025-06-11 14:57:12.524181 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-11 14:57:12.524187 | orchestrator | Wednesday 11 June 2025 14:57:04 +0000 (0:00:02.226) 0:02:44.495 ******** 2025-06-11 14:57:12.524193 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:12.524199 | orchestrator | 2025-06-11 14:57:12.524205 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-11 14:57:12.524211 | orchestrator | Wednesday 11 June 2025 14:57:07 +0000 (0:00:02.742) 0:02:47.238 ******** 2025-06-11 14:57:12.524217 | 
2025-06-11 14:57:12.524223 | orchestrator |
2025-06-11 14:57:12.524229 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:57:12.524236 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-11 14:57:12.524273 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-11 14:57:12.524281 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-11 14:57:12.524287 | orchestrator |
2025-06-11 14:57:12.524293 | orchestrator |
2025-06-11 14:57:12.524299 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:57:12.524309 | orchestrator | Wednesday 11 June 2025 14:57:09 +0000 (0:00:02.521) 0:02:49.760 ********
2025-06-11 14:57:12.524315 | orchestrator | ===============================================================================
2025-06-11 14:57:12.524322 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 73.49s
2025-06-11 14:57:12.524328 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.23s
2025-06-11 14:57:12.524334 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.75s
2025-06-11 14:57:12.524340 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s
2025-06-11 14:57:12.524346 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s
2025-06-11 14:57:12.524352 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.52s
2025-06-11 14:57:12.524357 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.42s
2025-06-11 14:57:12.524362 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.39s
2025-06-11 14:57:12.524367 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.23s
2025-06-11 14:57:12.524373 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.89s
2025-06-11 14:57:12.524378 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.79s
2025-06-11 14:57:12.524383 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.53s
2025-06-11 14:57:12.524389 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.38s
2025-06-11 14:57:12.524394 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.84s
2025-06-11 14:57:12.524399 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.61s
2025-06-11 14:57:12.524404 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2025-06-11 14:57:12.524410 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2025-06-11 14:57:12.524415 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.41s
2025-06-11 14:57:12.524420 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.38s
2025-06-11 14:57:12.524425 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2025-06-11 14:57:12.524431 | orchestrator | 2025-06-11 14:57:12 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:15.560141 | orchestrator | 2025-06-11 14:57:15 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:15.561991 | orchestrator | 2025-06-11 14:57:15 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED
2025-06-11 14:57:15.562097 | orchestrator | 2025-06-11 14:57:15 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:18.609175 | orchestrator | 2025-06-11 14:57:18 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:18.609521 | orchestrator | 2025-06-11 14:57:18 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED
2025-06-11 14:57:18.609717 | orchestrator | 2025-06-11 14:57:18 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:21.654828 | orchestrator | 2025-06-11 14:57:21 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:21.656203 | orchestrator | 2025-06-11 14:57:21 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state STARTED
2025-06-11 14:57:21.656757 | orchestrator | 2025-06-11 14:57:21 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:24.703360 | orchestrator | 2025-06-11 14:57:24 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:24.704587 | orchestrator | 2025-06-11 14:57:24 | INFO  | Task 7a9a66b6-b596-4537-bfb9-60527eb93b2b is in state SUCCESS
2025-06-11 14:57:24.704626 | orchestrator |
2025-06-11 14:57:24.706653 | orchestrator |
2025-06-11 14:57:24.706689 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-06-11 14:57:24.706702 | orchestrator |
2025-06-11 14:57:24.706713 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-11 14:57:24.706724 | orchestrator | Wednesday 11 June 2025 14:54:20 +0000 (0:00:00.092) 0:00:00.092 ********
2025-06-11 14:57:24.706735 | orchestrator | ok: [localhost] => {
2025-06-11 14:57:24.706748 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-06-11 14:57:24.706760 | orchestrator | }
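The fatal result that follows is the announced, expected failure: the play probes the MariaDB port to decide between a fresh deploy and an upgrade, and "Timeout when waiting for search string MariaDB in ...:3306" is the wait_for module failing to see the MariaDB protocol banner. A hedged sketch of the pattern (task layout and variable names are illustrative, not necessarily OSISM's):

    - name: Check MariaDB service
      ansible.builtin.wait_for:
        host: 192.168.16.9      # the address probed in the log
        port: 3306
        search_regex: MariaDB   # the server greeting on the wire
        timeout: 2              # matches "elapsed": 2 below
      register: mariadb_check
      ignore_errors: true       # produces the "...ignoring" line

    - name: Set kolla_action_mariadb = upgrade if MariaDB is already running
      ansible.builtin.set_fact:
        kolla_action_mariadb: upgrade
      when: mariadb_check is succeeded

Because the probe times out here, the upgrade branch is skipped and the next task keeps the regular deploy action.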
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-11 14:57:24.706816 | orchestrator | ...ignoring 2025-06-11 14:57:24.706827 | orchestrator | 2025-06-11 14:57:24.706837 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-11 14:57:24.706848 | orchestrator | Wednesday 11 June 2025 14:54:22 +0000 (0:00:02.727) 0:00:02.858 ******** 2025-06-11 14:57:24.706859 | orchestrator | skipping: [localhost] 2025-06-11 14:57:24.706869 | orchestrator | 2025-06-11 14:57:24.706880 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-11 14:57:24.706890 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:00.061) 0:00:02.920 ******** 2025-06-11 14:57:24.706901 | orchestrator | ok: [localhost] 2025-06-11 14:57:24.706911 | orchestrator | 2025-06-11 14:57:24.706922 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 14:57:24.706933 | orchestrator | 2025-06-11 14:57:24.706943 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 14:57:24.706954 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:00.131) 0:00:03.051 ******** 2025-06-11 14:57:24.706964 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:57:24.706975 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:57:24.706985 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:57:24.706996 | orchestrator | 2025-06-11 14:57:24.707006 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 14:57:24.707017 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:00.278) 0:00:03.330 ******** 2025-06-11 14:57:24.707028 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-11 14:57:24.707039 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-11 14:57:24.707050 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-11 14:57:24.707060 | orchestrator | 2025-06-11 14:57:24.707071 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-11 14:57:24.707082 | orchestrator | 2025-06-11 14:57:24.707092 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-11 14:57:24.707103 | orchestrator | Wednesday 11 June 2025 14:54:23 +0000 (0:00:00.557) 0:00:03.887 ******** 2025-06-11 14:57:24.707113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-11 14:57:24.707124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-11 14:57:24.707134 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-11 14:57:24.707166 | orchestrator | 2025-06-11 14:57:24.707178 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-11 14:57:24.707190 | orchestrator | Wednesday 11 June 2025 14:54:24 +0000 (0:00:00.361) 0:00:04.249 ******** 2025-06-11 14:57:24.707201 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:57:24.707212 | orchestrator | 2025-06-11 14:57:24.707223 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-11 14:57:24.707233 | orchestrator | Wednesday 11 June 2025 14:54:24 +0000 (0:00:00.507) 0:00:04.756 ******** 2025-06-11 
14:57:24.707297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707362 | orchestrator | 2025-06-11 14:57:24.707381 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-11 14:57:24.707392 | orchestrator | Wednesday 11 June 2025 14:54:28 +0000 (0:00:03.160) 0:00:07.916 ******** 2025-06-11 14:57:24.707403 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.707415 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.707426 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.707436 | orchestrator | 2025-06-11 14:57:24.707447 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-11 14:57:24.707458 | orchestrator | Wednesday 11 June 2025 14:54:28 +0000 (0:00:00.733) 0:00:08.650 ******** 2025-06-11 14:57:24.707469 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.707479 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.707490 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.707500 | orchestrator | 2025-06-11 14:57:24.707511 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-11 14:57:24.707522 | orchestrator | Wednesday 11 June 2025 14:54:30 +0000 (0:00:01.569) 0:00:10.220 ******** 2025-06-11 14:57:24.707534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.707598 | orchestrator | 2025-06-11 14:57:24.707609 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-11 14:57:24.707620 | orchestrator | Wednesday 11 June 2025 14:54:34 +0000 (0:00:04.658) 0:00:14.878 ******** 2025-06-11 14:57:24.707630 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.707641 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.707652 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.707663 | orchestrator | 2025-06-11 14:57:24.707674 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-11 14:57:24.707685 | orchestrator | Wednesday 11 June 2025 14:54:36 +0000 (0:00:01.274) 0:00:16.153 ******** 2025-06-11 14:57:24.707695 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.707706 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:24.707717 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:24.707728 | orchestrator | 2025-06-11 14:57:24.707738 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-11 14:57:24.707749 | orchestrator | Wednesday 11 June 2025 14:54:40 +0000 (0:00:03.817) 0:00:19.971 ******** 2025-06-11 14:57:24.707760 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:57:24.707771 | orchestrator | 2025-06-11 14:57:24.707782 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-11 14:57:24.707792 | orchestrator | Wednesday 11 June 2025 14:54:40 +0000 (0:00:00.564) 0:00:20.535 ******** 2025-06-11 14:57:24.707818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.707832 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.707844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.707862 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:24.707886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.707899 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.707910 | orchestrator | 2025-06-11 14:57:24.707920 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-11 14:57:24.707931 | orchestrator | Wednesday 11 June 2025 14:54:43 +0000 (0:00:02.739) 0:00:23.275 ******** 2025-06-11 14:57:24.707943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.707961 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:24.707983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.707995 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.708007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.708031 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.708042 | orchestrator | 2025-06-11 14:57:24.708052 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-11 14:57:24.708063 | orchestrator | Wednesday 11 June 2025 14:54:45 +0000 (0:00:02.007) 0:00:25.282 ******** 2025-06-11 14:57:24.708079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.708091 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.708111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.708129 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:24.708141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-11 14:57:24.708152 | 
orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.708163 | orchestrator | 2025-06-11 14:57:24.708174 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-11 14:57:24.708185 | orchestrator | Wednesday 11 June 2025 14:54:47 +0000 (0:00:02.461) 0:00:27.744 ******** 2025-06-11 14:57:24.708210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.708230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.708272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-11 14:57:24.708293 | orchestrator | 2025-06-11 14:57:24.708304 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-11 14:57:24.708315 | orchestrator | Wednesday 11 June 2025 14:54:50 +0000 (0:00:02.903) 0:00:30.647 ******** 2025-06-11 14:57:24.708325 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:24.708336 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.708346 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:24.708357 | orchestrator | 2025-06-11 14:57:24.708367 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-11 14:57:24.708378 | orchestrator | Wednesday 11 June 2025 14:54:51 +0000 (0:00:01.090) 0:00:31.738 ******** 2025-06-11 14:57:24.708389 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:57:24.708400 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:57:24.708410 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:57:24.708421 | orchestrator | 2025-06-11 14:57:24.708431 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-11 14:57:24.708442 | orchestrator | Wednesday 11 June 2025 14:54:52 +0000 
(0:00:00.356) 0:00:32.095 ********
2025-06-11 14:57:24.708452 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.708463 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:24.708473 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.708484 | orchestrator |
2025-06-11 14:57:24.708495 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-06-11 14:57:24.708505 | orchestrator | Wednesday 11 June 2025 14:54:52 +0000 (0:00:00.327) 0:00:32.423 ********
2025-06-11 14:57:24.708517 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-06-11 14:57:24.708528 | orchestrator | ...ignoring
2025-06-11 14:57:24.708539 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-06-11 14:57:24.708549 | orchestrator | ...ignoring
2025-06-11 14:57:24.708560 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-06-11 14:57:24.708571 | orchestrator | ...ignoring
2025-06-11 14:57:24.708581 | orchestrator |
2025-06-11 14:57:24.708592 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-06-11 14:57:24.708603 | orchestrator | Wednesday 11 June 2025 14:55:03 +0000 (0:00:10.797) 0:00:43.220 ********
2025-06-11 14:57:24.708613 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.708624 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:24.708634 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.708644 | orchestrator |
2025-06-11 14:57:24.708655 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-06-11 14:57:24.708666 | orchestrator | Wednesday 11 June 2025 14:55:03 +0000 (0:00:00.459) 0:00:43.881 ********
2025-06-11 14:57:24.708677 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.708694 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.708704 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.708715 | orchestrator |
2025-06-11 14:57:24.708725 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-06-11 14:57:24.708736 | orchestrator | Wednesday 11 June 2025 14:55:04 +0000 (0:00:00.424) 0:00:44.340 ********
2025-06-11 14:57:24.708747 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.708757 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.708768 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.708778 | orchestrator |
2025-06-11 14:57:24.708789 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-06-11 14:57:24.708800 | orchestrator | Wednesday 11 June 2025 14:55:04 +0000 (0:00:00.415) 0:00:44.765 ********
2025-06-11 14:57:24.708810 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.708821 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.708831 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.708842 | orchestrator |
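On this fresh deploy the WSREP status checks are skipped; on an existing cluster they determine which members are already synced before anything is restarted. A sketch of what such a probe can look like (the command, the container name, and the password variable are illustrative):

    - name: Check MariaDB service WSREP sync status
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor
        -p{{ mariadb_monitor_password }}
        -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      changed_when: false   # a read-only probe

    - name: Extract MariaDB service WSREP sync status
      ansible.builtin.set_fact:
        mariadb_wsrep_synced: "{{ 'Synced' in wsrep_status.stdout }}"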
2025-06-11 14:57:24.708852 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-06-11 14:57:24.708863 | orchestrator | Wednesday 11 June 2025 14:55:05 +0000 (0:00:00.415) 0:00:45.181 ********
2025-06-11 14:57:24.708873 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.708884 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:24.708894 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.708905 | orchestrator |
2025-06-11 14:57:24.708920 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-06-11 14:57:24.708931 | orchestrator | Wednesday 11 June 2025 14:55:05 +0000 (0:00:00.621) 0:00:45.802 ********
2025-06-11 14:57:24.708947 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.708959 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.708969 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.708980 | orchestrator |
2025-06-11 14:57:24.708991 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-11 14:57:24.709001 | orchestrator | Wednesday 11 June 2025 14:55:06 +0000 (0:00:00.430) 0:00:46.233 ********
2025-06-11 14:57:24.709012 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.709023 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.709033 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-06-11 14:57:24.709044 | orchestrator |
2025-06-11 14:57:24.709054 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-06-11 14:57:24.709065 | orchestrator | Wednesday 11 June 2025 14:55:06 +0000 (0:00:00.381) 0:00:46.615 ********
2025-06-11 14:57:24.709075 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.709086 | orchestrator |
2025-06-11 14:57:24.709096 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-06-11 14:57:24.709107 | orchestrator | Wednesday 11 June 2025 14:55:16 +0000 (0:00:10.002) 0:00:56.618 ********
2025-06-11 14:57:24.709118 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.709128 | orchestrator |
2025-06-11 14:57:24.709139 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-11 14:57:24.709149 | orchestrator | Wednesday 11 June 2025 14:55:16 +0000 (0:00:00.129) 0:00:56.747 ********
2025-06-11 14:57:24.709160 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.709170 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.709181 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.709191 | orchestrator |
2025-06-11 14:57:24.709202 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-06-11 14:57:24.709212 | orchestrator | Wednesday 11 June 2025 14:55:17 +0000 (0:00:01.043) 0:00:57.791 ********
2025-06-11 14:57:24.709223 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.709234 | orchestrator |
2025-06-11 14:57:24.709286 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-06-11 14:57:24.709299 | orchestrator | Wednesday 11 June 2025 14:55:25 +0000 (0:00:07.987) 0:01:05.778 ********
2025-06-11 14:57:24.709310 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.709328 | orchestrator |
2025-06-11 14:57:24.709338 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-06-11 14:57:24.709349 | orchestrator | Wednesday 11 June 2025 14:55:27 +0000 (0:00:01.537) 0:01:07.316 ********
2025-06-11 14:57:24.709360 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.709370 | orchestrator |
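This is the classic Galera bootstrap ordering: exactly one host (testbed-node-0) initializes the new cluster, its name is stored as a fact, and the handlers below bring the remaining members in one at a time, each gated on port liveness and WSREP sync so that only a single node is ever down. The repeated "PLAY [Start mariadb services]" banners further down are what a serial batch size of 1 looks like in Ansible output. A hedged sketch of that rolling pattern (the group name is hypothetical):

    - name: Start mariadb services
      hosts: mariadb_start   # hypothetical group of members still to join
      serial: 1              # one node per batch, as seen in the log
      tasks:
        - name: Restart MariaDB container
          ansible.builtin.command: docker restart mariadb

        - name: Wait for MariaDB service port liveness
          ansible.builtin.wait_for:
            host: "{{ ansible_host }}"
            port: 3306
            search_regex: MariaDB
            timeout: 60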
14:57:24.709381 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-11 14:57:24.709392 | orchestrator | Wednesday 11 June 2025 14:55:29 +0000 (0:00:02.271) 0:01:09.587 ******** 2025-06-11 14:57:24.709403 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:57:24.709413 | orchestrator | 2025-06-11 14:57:24.709424 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-11 14:57:24.709435 | orchestrator | Wednesday 11 June 2025 14:55:29 +0000 (0:00:00.107) 0:01:09.695 ******** 2025-06-11 14:57:24.709445 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:24.709456 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:57:24.709466 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:57:24.709477 | orchestrator | 2025-06-11 14:57:24.709487 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-11 14:57:24.709498 | orchestrator | Wednesday 11 June 2025 14:55:30 +0000 (0:00:00.405) 0:01:10.101 ******** 2025-06-11 14:57:24.709508 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:57:24.709519 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-11 14:57:24.709530 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:24.709540 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:24.709551 | orchestrator | 2025-06-11 14:57:24.709561 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-11 14:57:24.709572 | orchestrator | skipping: no hosts matched 2025-06-11 14:57:24.709583 | orchestrator | 2025-06-11 14:57:24.709593 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-11 14:57:24.709604 | orchestrator | 2025-06-11 14:57:24.709614 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-11 14:57:24.709625 | orchestrator | Wednesday 11 June 2025 14:55:30 +0000 (0:00:00.303) 0:01:10.405 ******** 2025-06-11 14:57:24.709636 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:57:24.709646 | orchestrator | 2025-06-11 14:57:24.709657 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-11 14:57:24.709667 | orchestrator | Wednesday 11 June 2025 14:55:52 +0000 (0:00:22.389) 0:01:32.795 ******** 2025-06-11 14:57:24.709678 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:57:24.709689 | orchestrator | 2025-06-11 14:57:24.709699 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-11 14:57:24.709710 | orchestrator | Wednesday 11 June 2025 14:56:08 +0000 (0:00:15.583) 0:01:48.379 ******** 2025-06-11 14:57:24.709721 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:57:24.709731 | orchestrator | 2025-06-11 14:57:24.709742 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-11 14:57:24.709753 | orchestrator | 2025-06-11 14:57:24.709763 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-11 14:57:24.709774 | orchestrator | Wednesday 11 June 2025 14:56:10 +0000 (0:00:02.451) 0:01:50.830 ******** 2025-06-11 14:57:24.709785 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:57:24.709795 | orchestrator | 2025-06-11 14:57:24.709806 | orchestrator | TASK [mariadb : Wait for MariaDB service port 
2025-06-11 14:57:24.709699 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-11 14:57:24.709710 | orchestrator | Wednesday 11 June 2025 14:56:08 +0000 (0:00:15.583) 0:01:48.379 ********
2025-06-11 14:57:24.709721 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:24.709731 | orchestrator |
2025-06-11 14:57:24.709742 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-11 14:57:24.709753 | orchestrator |
2025-06-11 14:57:24.709763 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-11 14:57:24.709774 | orchestrator | Wednesday 11 June 2025 14:56:10 +0000 (0:00:02.451) 0:01:50.830 ********
2025-06-11 14:57:24.709785 | orchestrator | changed: [testbed-node-2]
2025-06-11 14:57:24.709795 | orchestrator |
2025-06-11 14:57:24.709806 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-11 14:57:24.709816 | orchestrator | Wednesday 11 June 2025 14:56:31 +0000 (0:00:20.087) 0:02:10.917 ********
2025-06-11 14:57:24.709827 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.709837 | orchestrator |
2025-06-11 14:57:24.709848 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-11 14:57:24.709864 | orchestrator | Wednesday 11 June 2025 14:56:51 +0000 (0:00:20.588) 0:02:31.506 ********
2025-06-11 14:57:24.709875 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.709886 | orchestrator |
2025-06-11 14:57:24.709896 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-11 14:57:24.709911 | orchestrator |
2025-06-11 14:57:24.709928 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-11 14:57:24.709940 | orchestrator | Wednesday 11 June 2025 14:56:54 +0000 (0:00:02.649) 0:02:34.155 ********
2025-06-11 14:57:24.709950 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.709961 | orchestrator |
2025-06-11 14:57:24.709971 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-11 14:57:24.709982 | orchestrator | Wednesday 11 June 2025 14:57:08 +0000 (0:00:14.747) 0:02:48.903 ********
2025-06-11 14:57:24.709992 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.710003 | orchestrator |
2025-06-11 14:57:24.710014 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-11 14:57:24.710063 | orchestrator | Wednesday 11 June 2025 14:57:09 +0000 (0:00:00.521) 0:02:49.424 ********
2025-06-11 14:57:24.710075 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.710085 | orchestrator |
2025-06-11 14:57:24.710096 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-11 14:57:24.710107 | orchestrator |
2025-06-11 14:57:24.710118 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-11 14:57:24.710128 | orchestrator | Wednesday 11 June 2025 14:57:11 +0000 (0:00:02.371) 0:02:51.796 ********
2025-06-11 14:57:24.710139 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 14:57:24.710150 | orchestrator |
2025-06-11 14:57:24.710160 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-06-11 14:57:24.710171 | orchestrator | Wednesday 11 June 2025 14:57:12 +0000 (0:00:00.604) 0:02:52.400 ********
2025-06-11 14:57:24.710182 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.710192 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.710203 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.710214 | orchestrator |
2025-06-11 14:57:24.710224 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-06-11 14:57:24.710235 | orchestrator | Wednesday 11 June 2025 14:57:14 +0000 (0:00:02.370) 0:02:54.770 ********
2025-06-11 14:57:24.710263 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.710275 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.710286 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.710296 | orchestrator |
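The user-creation tasks above run only on the bootstrap host (hence the skips on the other nodes) and reduce to ordinary CREATE USER / GRANT statements issued once per cluster. A rough sketch of the idea; the user name, host pattern, and privilege list are illustrative assumptions, not kolla-ansible's actual values:

```python
import pymysql  # assumption: PyMySQL is available


def create_monitoring_user(conn, name, password):
    """Create a low-privilege user akin to the monitor user above."""
    with conn.cursor() as cur:
        # %% renders as a literal % host pattern once parameters are bound
        cur.execute(
            "CREATE USER IF NOT EXISTS %s@'%%' IDENTIFIED BY %s",
            (name, password),
        )
        cur.execute("GRANT USAGE, REPLICATION CLIENT ON *.* TO %s@'%%'", (name,))
    conn.commit()
```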
2025-06-11 14:57:24.710307 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-06-11 14:57:24.710318 | orchestrator | Wednesday 11 June 2025 14:57:17 +0000 (0:00:02.145) 0:02:56.916 ********
2025-06-11 14:57:24.710328 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.710339 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.710350 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.710360 | orchestrator |
2025-06-11 14:57:24.710371 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-06-11 14:57:24.710382 | orchestrator | Wednesday 11 June 2025 14:57:19 +0000 (0:00:02.032) 0:02:58.948 ********
2025-06-11 14:57:24.710392 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.710403 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.710414 | orchestrator | changed: [testbed-node-0]
2025-06-11 14:57:24.710424 | orchestrator |
2025-06-11 14:57:24.710435 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-06-11 14:57:24.710446 | orchestrator | Wednesday 11 June 2025 14:57:21 +0000 (0:00:02.037) 0:03:00.986 ********
2025-06-11 14:57:24.710457 | orchestrator | ok: [testbed-node-0]
2025-06-11 14:57:24.710467 | orchestrator | ok: [testbed-node-1]
2025-06-11 14:57:24.710478 | orchestrator | ok: [testbed-node-2]
2025-06-11 14:57:24.710489 | orchestrator |
2025-06-11 14:57:24.710500 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-11 14:57:24.710510 | orchestrator | Wednesday 11 June 2025 14:57:23 +0000 (0:00:02.874) 0:03:03.860 ********
2025-06-11 14:57:24.710521 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:57:24.710539 | orchestrator | skipping: [testbed-node-1]
2025-06-11 14:57:24.710549 | orchestrator | skipping: [testbed-node-2]
2025-06-11 14:57:24.710560 | orchestrator |
2025-06-11 14:57:24.710571 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 14:57:24.710582 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-11 14:57:24.710593 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-06-11 14:57:24.710605 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-11 14:57:24.710616 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-11 14:57:24.710627 | orchestrator |
2025-06-11 14:57:24.710638 | orchestrator |
2025-06-11 14:57:24.710648 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 14:57:24.710659 | orchestrator | Wednesday 11 June 2025 14:57:24 +0000 (0:00:00.229) 0:03:04.090 ********
2025-06-11 14:57:24.710670 | orchestrator | ===============================================================================
2025-06-11 14:57:24.710681 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.48s
2025-06-11 14:57:24.710691 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.17s
2025-06-11 14:57:24.710702 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.75s
2025-06-11 14:57:24.710713 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s
2025-06-11 14:57:24.710729 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.00s
2025-06-11 14:57:24.710740 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.99s
2025-06-11 14:57:24.710757 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.10s
2025-06-11 14:57:24.710768 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.66s
2025-06-11 14:57:24.710779 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.82s
2025-06-11 14:57:24.710789 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.16s
2025-06-11 14:57:24.710800 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.90s
2025-06-11 14:57:24.710811 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.87s
2025-06-11 14:57:24.710822 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.74s
2025-06-11 14:57:24.710832 | orchestrator | Check MariaDB service --------------------------------------------------- 2.73s
2025-06-11 14:57:24.710843 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.46s
2025-06-11 14:57:24.710854 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.37s
2025-06-11 14:57:24.710864 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s
2025-06-11 14:57:24.710875 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.27s
2025-06-11 14:57:24.710886 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.15s
2025-06-11 14:57:24.710896 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.04s
2025-06-11 14:57:24.710908 | orchestrator | 2025-06-11 14:57:24 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:27.764646 | orchestrator | 2025-06-11 14:57:27 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:57:27.765734 | orchestrator | 2025-06-11 14:57:27 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:27.767659 | orchestrator | 2025-06-11 14:57:27 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:57:27.767760 | orchestrator | 2025-06-11 14:57:27 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:30.813014 | orchestrator | 2025-06-11 14:57:30 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:57:30.813116 | orchestrator | 2025-06-11 14:57:30 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:30.815110 | orchestrator | 2025-06-11 14:57:30 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:57:30.815751 | orchestrator | 2025-06-11 14:57:30 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:57:33.861984 | orchestrator | 2025-06-11 14:57:33 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:57:33.862302 | orchestrator | 2025-06-11 14:57:33 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:57:33.862925 | orchestrator | 2025-06-11 14:57:33 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:57:33.863021 | orchestrator | 2025-06-11 14:57:33 | INFO  | Wait 1 second(s) until the next check
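From here the job sits in a polling loop: every few seconds it queries the state of each outstanding task ID and keeps waiting while they report STARTED. The shape of that loop, with a hypothetical get_task_state() callable standing in for the real task-API query:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical API call
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```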
2025-06-11 14:58:19.553262 | orchestrator | 2025-06-11 14:58:19 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:19.554454 | orchestrator | 2025-06-11 14:58:19 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:58:19.555058 | orchestrator | 2025-06-11 14:58:19 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:58:19.555531 | orchestrator | 2025-06-11 14:58:19 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:58:22.589410 | orchestrator | 2025-06-11 14:58:22 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:22.590279 | orchestrator | 2025-06-11 14:58:22 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:58:22.591523 | orchestrator | 2025-06-11 14:58:22 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:58:22.591547 | orchestrator | 2025-06-11 14:58:22 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:58:25.638844 | orchestrator | 2025-06-11 14:58:25 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:25.640996 | orchestrator | 2025-06-11 14:58:25 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:58:25.643869 | orchestrator | 2025-06-11 14:58:25 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:58:25.643924 | orchestrator | 2025-06-11 14:58:25 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:58:28.688331 | orchestrator | 2025-06-11 14:58:28 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:28.690907 | orchestrator | 2025-06-11 14:58:28 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:58:28.693342 | orchestrator | 2025-06-11 14:58:28 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:58:28.693780 | orchestrator | 2025-06-11 14:58:28 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:58:31.735474 | orchestrator | 2025-06-11 14:58:31 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:31.736721 | orchestrator | 2025-06-11 14:58:31 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state STARTED
2025-06-11 14:58:31.737827 | orchestrator | 2025-06-11 14:58:31 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:58:31.738197 | orchestrator | 2025-06-11 14:58:31 | INFO  | Wait 1 second(s) until the next check
2025-06-11 14:58:34.778387 | orchestrator | 2025-06-11 14:58:34 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED
2025-06-11 14:58:34.780851 | orchestrator | 2025-06-11 14:58:34 | INFO  | Task 8629fc7f-a017-42d7-b0b8-e144a9c23c36 is in state SUCCESS
2025-06-11 14:58:34.782772 | orchestrator |
2025-06-11 14:58:34.782808 | orchestrator |
2025-06-11 14:58:34.782820 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-06-11 14:58:34.782832 | orchestrator |
2025-06-11 14:58:34.782842 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-11 14:58:34.783026 | orchestrator | Wednesday 11 June 2025 14:56:26 +0000 (0:00:00.581) 0:00:00.581 ********
2025-06-11 14:58:34.783832 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:58:34.783868 | orchestrator |
2025-06-11 14:58:34.783883 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-11 14:58:34.783894 | orchestrator | Wednesday 11 June 2025 14:56:26 +0000 (0:00:00.536) 0:00:01.118 ********
2025-06-11 14:58:34.783905 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.783917 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.783927 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.783938 | orchestrator |
2025-06-11 14:58:34.783962 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-11 14:58:34.783973 | orchestrator | Wednesday 11 June 2025 14:56:27 +0000 (0:00:00.635) 0:00:01.753 ********
2025-06-11 14:58:34.783984 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.783994 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784005 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784015 | orchestrator |
2025-06-11 14:58:34.784026 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-11 14:58:34.784037 | orchestrator | Wednesday 11 June 2025 14:56:27 +0000 (0:00:00.254) 0:00:02.007 ********
2025-06-11 14:58:34.784048 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784058 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784069 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784079 | orchestrator |
2025-06-11 14:58:34.784090 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-11 14:58:34.784101 | orchestrator | Wednesday 11 June 2025 14:56:28 +0000 (0:00:00.730) 0:00:02.737 ********
2025-06-11 14:58:34.784111 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784121 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784132 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784142 | orchestrator |
2025-06-11 14:58:34.784153 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-11 14:58:34.784250 | orchestrator | Wednesday 11 June 2025 14:56:28 +0000 (0:00:00.270) 0:00:03.008 ********
2025-06-11 14:58:34.784266 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784277 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784288 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784298 | orchestrator |
2025-06-11 14:58:34.784309 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-11 14:58:34.784320 | orchestrator | Wednesday 11 June 2025 14:56:28 +0000 (0:00:00.246) 0:00:03.255 ********
2025-06-11 14:58:34.784330 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784340 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784351 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784361 | orchestrator |
2025-06-11 14:58:34.784372 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-11 14:58:34.784382 | orchestrator | Wednesday 11 June 2025 14:56:29 +0000 (0:00:00.277) 0:00:03.532 ********
2025-06-11 14:58:34.784393 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.784405 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.784415 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.784426 | orchestrator |
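"Check if podman binary is present" followed by "Set_fact container_binary" is how ceph-ansible picks the container runtime for the commands that come later. Reduced to a sketch (the real role also weighs the distribution and the is_atomic fact):

```python
import shutil


def pick_container_binary():
    """Prefer podman when installed, otherwise fall back to docker."""
    return "podman" if shutil.which("podman") else "docker"
```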
2025-06-11 14:58:34.784437 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-11 14:58:34.784449 | orchestrator | Wednesday 11 June 2025 14:56:29 +0000 (0:00:00.380) 0:00:03.913 ********
2025-06-11 14:58:34.784476 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784489 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784529 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784541 | orchestrator |
2025-06-11 14:58:34.784554 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-11 14:58:34.784566 | orchestrator | Wednesday 11 June 2025 14:56:29 +0000 (0:00:00.311) 0:00:04.224 ********
2025-06-11 14:58:34.784579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:58:34.784591 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:58:34.784603 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:58:34.784614 | orchestrator |
2025-06-11 14:58:34.784627 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-11 14:58:34.784639 | orchestrator | Wednesday 11 June 2025 14:56:30 +0000 (0:00:00.605) 0:00:04.830 ********
2025-06-11 14:58:34.784651 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.784663 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.784675 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.784687 | orchestrator |
2025-06-11 14:58:34.784699 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-11 14:58:34.784712 | orchestrator | Wednesday 11 June 2025 14:56:30 +0000 (0:00:00.405) 0:00:05.236 ********
2025-06-11 14:58:34.784723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:58:34.784735 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:58:34.784747 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:58:34.784759 | orchestrator |
2025-06-11 14:58:34.784772 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-11 14:58:34.784784 | orchestrator | Wednesday 11 June 2025 14:56:33 +0000 (0:00:02.144) 0:00:07.381 ********
2025-06-11 14:58:34.784796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-11 14:58:34.784807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-11 14:58:34.784817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-11 14:58:34.784828 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.784838 | orchestrator |
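"Check for a ceph mon socket" applies only to non-containerized deployments, which is why every item is skipped here; it amounts to looking for a monitor admin socket on the host. A minimal equivalent, assuming the conventional /var/run/ceph socket location:

```python
import glob


def find_mon_socket(cluster="ceph"):
    """Return a mon admin socket path, or None if no mon is running."""
    matches = glob.glob(f"/var/run/{cluster}/{cluster}-mon.*.asok")
    return matches[0] if matches else None
```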
2025-06-11 14:58:34.784849 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-11 14:58:34.784911 | orchestrator | Wednesday 11 June 2025 14:56:33 +0000 (0:00:00.395) 0:00:07.776 ********
2025-06-11 14:58:34.784927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.784940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.784958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.784970 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.784980 | orchestrator |
2025-06-11 14:58:34.784991 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-11 14:58:34.785002 | orchestrator | Wednesday 11 June 2025 14:56:34 +0000 (0:00:00.768) 0:00:08.545 ********
2025-06-11 14:58:34.785014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785057 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785068 | orchestrator |
2025-06-11 14:58:34.785078 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-11 14:58:34.785089 | orchestrator | Wednesday 11 June 2025 14:56:34 +0000 (0:00:00.151) 0:00:08.696 ********
2025-06-11 14:58:34.785101 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '74b478c5def2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-11 14:56:31.537503', 'end': '2025-06-11 14:56:31.592596', 'delta': '0:00:00.055093', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['74b478c5def2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785115 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '60e7022a5e06', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-11 14:56:32.289057', 'end': '2025-06-11 14:56:32.330754', 'delta': '0:00:00.041697', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['60e7022a5e06'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785161 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9ac2a718e124', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-11 14:56:32.813011', 'end': '2025-06-11 14:56:32.865124', 'delta': '0:00:00.052113', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9ac2a718e124'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.785175 | orchestrator |
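The ok results above show the probe verbatim: docker ps -q --filter name=ceph-mon-&lt;hostname&gt;, where a non-empty container ID identifies the running mon. The same check wrapped in a few lines of Python:

```python
import subprocess


def running_mon_container(hostname):
    """Return the container ID of ceph-mon-<hostname>, or None."""
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    container_id = result.stdout.strip()
    return container_id or None
```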
2025-06-11 14:58:34.785190 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-11 14:58:34.785201 | orchestrator | Wednesday 11 June 2025 14:56:34 +0000 (0:00:00.353) 0:00:09.050 ********
2025-06-11 14:58:34.785212 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.785243 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.785273 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.785293 | orchestrator |
2025-06-11 14:58:34.785312 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-11 14:58:34.785324 | orchestrator | Wednesday 11 June 2025 14:56:35 +0000 (0:00:00.433) 0:00:09.483 ********
2025-06-11 14:58:34.785335 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-06-11 14:58:34.785346 | orchestrator |
2025-06-11 14:58:34.785356 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-11 14:58:34.785367 | orchestrator | Wednesday 11 June 2025 14:56:36 +0000 (0:00:01.677) 0:00:11.161 ********
2025-06-11 14:58:34.785377 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785388 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785398 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785409 | orchestrator |
2025-06-11 14:58:34.785419 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-11 14:58:34.785430 | orchestrator | Wednesday 11 June 2025 14:56:37 +0000 (0:00:00.305) 0:00:11.466 ********
2025-06-11 14:58:34.785440 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785450 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785461 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785471 | orchestrator |
2025-06-11 14:58:34.785482 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-11 14:58:34.785492 | orchestrator | Wednesday 11 June 2025 14:56:37 +0000 (0:00:00.390) 0:00:11.856 ********
2025-06-11 14:58:34.785503 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785513 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785524 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785535 | orchestrator |
2025-06-11 14:58:34.785545 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-11 14:58:34.785556 | orchestrator | Wednesday 11 June 2025 14:56:37 +0000 (0:00:00.432) 0:00:12.289 ********
2025-06-11 14:58:34.785566 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.785577 | orchestrator |
2025-06-11 14:58:34.785587 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-11 14:58:34.785598 | orchestrator | Wednesday 11 June 2025 14:56:38 +0000 (0:00:00.119) 0:00:12.409 ********
2025-06-11 14:58:34.785608 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785618 | orchestrator |
2025-06-11 14:58:34.785629 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-11 14:58:34.785639 | orchestrator | Wednesday 11 June 2025 14:56:38 +0000 (0:00:00.218) 0:00:12.627 ********
2025-06-11 14:58:34.785650 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785668 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785685 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785705 | orchestrator |
2025-06-11 14:58:34.785723 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-11 14:58:34.785739 | orchestrator | Wednesday 11 June 2025 14:56:38 +0000 (0:00:00.282) 0:00:12.909 ********
2025-06-11 14:58:34.785750 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785761 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785771 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785782 | orchestrator |
2025-06-11 14:58:34.785792 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-11 14:58:34.785803 | orchestrator | Wednesday 11 June 2025 14:56:38 +0000 (0:00:00.328) 0:00:13.238 ********
2025-06-11 14:58:34.785813 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785824 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785834 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785844 | orchestrator |
2025-06-11 14:58:34.785855 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-11 14:58:34.785865 | orchestrator | Wednesday 11 June 2025 14:56:39 +0000 (0:00:00.455) 0:00:13.694 ********
2025-06-11 14:58:34.785876 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785893 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785904 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785914 | orchestrator |
2025-06-11 14:58:34.785925 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-11 14:58:34.785935 | orchestrator | Wednesday 11 June 2025 14:56:39 +0000 (0:00:00.289) 0:00:13.984 ********
2025-06-11 14:58:34.785946 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.785956 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.785967 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.785977 | orchestrator |
2025-06-11 14:58:34.785988 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-11
14:58:34.785998 | orchestrator | Wednesday 11 June 2025 14:56:39 +0000 (0:00:00.304) 0:00:14.288 ******** 2025-06-11 14:58:34.786008 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:58:34.786074 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:58:34.786094 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:58:34.786112 | orchestrator | 2025-06-11 14:58:34.786131 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-11 14:58:34.786193 | orchestrator | Wednesday 11 June 2025 14:56:40 +0000 (0:00:00.299) 0:00:14.588 ******** 2025-06-11 14:58:34.786206 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:58:34.786217 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:58:34.786255 | orchestrator | skipping: [testbed-node-5] 2025-06-11 14:58:34.786266 | orchestrator | 2025-06-11 14:58:34.786277 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-11 14:58:34.786288 | orchestrator | Wednesday 11 June 2025 14:56:40 +0000 (0:00:00.472) 0:00:15.060 ******** 2025-06-11 14:58:34.786307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a', 'dm-uuid-LVM-qVRyAxwlJvte8cTNXy3Q4ieDHHetj3deFYwX2dPbY3zKfgDtHZrzIE9r06eLkkYO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32', 'dm-uuid-LVM-9ctOp4BFEl0FojxVV506NxxMS68q2DXHMxe31gAQeSYsjeX7eOnl2h2wNXngqQ2x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957', 'dm-uuid-LVM-EbyCR13qjFTphmQN19BXO3d4n1cvwa4haVL98gcncL02tG3KA712BAlcE1qyAVah'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0x4c5h-mp39-nMoR-hRdC-1mio-j0O1-u14n29', 'scsi-0QEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299', 'scsi-SQEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41', 'dm-uuid-LVM-MdsAZtVH1G7DkfJmEQHVDEZxrg9oMpJP0d3ZOtz96FrlSOfd8B0hZQ1CkTL0r92D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjl1Hs-KGha-577H-PI94-OcVY-YPfK-kG6ndB', 'scsi-0QEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3', 'scsi-SQEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc', 'scsi-SQEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786865 | orchestrator | skipping: [testbed-node-3] 2025-06-11 14:58:34.786881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTUHR3-cU3L-NQI4-ePP2-iL5O-Ympv-XUs7Dw', 'scsi-0QEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86', 'scsi-SQEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH3T3i-Dn3M-XNKe-Lyl1-pcgd-aURa-0aARjI', 'scsi-0QEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9', 'scsi-SQEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b', 'scsi-SQEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.786933 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:58:34.786944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d', 'dm-uuid-LVM-OZhBBziM30Sv33izNUJCCpS1ZmIlNIDNGZMmdZdnb82chb3ij6QUzfbwJKZdIPA4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1', 'dm-uuid-LVM-kgQ11RSuUfOfaFhh0TgRjAWKWH7JHXuUvHaRAgQy1WMqaNvnF3uD6Jn1dgDVgtwG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.786990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-11 14:58:34.787088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-11 14:58:34.787108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EZ46eo-ukF5-k0SP-GANR-L15Q-lcyW-RFGZXD', 'scsi-0QEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b', 'scsi-SQEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:58:34.787119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ELZhyh-Homk-4KJX-dJ89-JbC9-K2tK-3FJ5f5', 'scsi-0QEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961', 'scsi-SQEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:58:34.787131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86', 'scsi-SQEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:58:34.787148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-11 14:58:34.787307 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.787326 | orchestrator |
2025-06-11 14:58:34.787339 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-11 14:58:34.787350 | orchestrator | Wednesday 11 June 2025 14:56:41 +0000 (0:00:00.593) 0:00:15.654 ********
2025-06-11 14:58:34.787369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a', 'dm-uuid-LVM-qVRyAxwlJvte8cTNXy3Q4ieDHHetj3deFYwX2dPbY3zKfgDtHZrzIE9r06eLkkYO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {},
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32', 'dm-uuid-LVM-9ctOp4BFEl0FojxVV506NxxMS68q2DXHMxe31gAQeSYsjeX7eOnl2h2wNXngqQ2x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957', 'dm-uuid-LVM-EbyCR13qjFTphmQN19BXO3d4n1cvwa4haVL98gcncL02tG3KA712BAlcE1qyAVah'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787548 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41', 'dm-uuid-LVM-MdsAZtVH1G7DkfJmEQHVDEZxrg9oMpJP0d3ZOtz96FrlSOfd8B0hZQ1CkTL0r92D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0c481cc-e968-4619-84fd-240890fb97cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787621 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--28682609--b410--5575--84cb--1d408b8d4d4a-osd--block--28682609--b410--5575--84cb--1d408b8d4d4a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0x4c5h-mp39-nMoR-hRdC-1mio-j0O1-u14n29', 'scsi-0QEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299', 'scsi-SQEMU_QEMU_HARDDISK_997790a1-2284-4ae8-ae59-5b744e390299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b6a3d2e7--9824--554b--8cae--981831ed9e32-osd--block--b6a3d2e7--9824--554b--8cae--981831ed9e32'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjl1Hs-KGha-577H-PI94-OcVY-YPfK-kG6ndB', 'scsi-0QEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3', 'scsi-SQEMU_QEMU_HARDDISK_1d2dd3c0-811b-40b4-99af-5946e13dbfd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc', 'scsi-SQEMU_QEMU_HARDDISK_98e4ef65-326b-406b-8d68-9bbb471a6ffc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787734 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787746 | orchestrator | skipping: [testbed-node-3] 
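Every per-device skip in this task (continuing below for testbed-node-4 and testbed-node-5) carries the same false_condition, osd_auto_discovery | default(False) | bool: the testbed pins its OSD devices explicitly, so auto-discovery stays at its default of false and the device list is never derived from the gathered ansible_devices facts. As a minimal sketch of how such a set_fact can build the list from exactly the device properties visible in the loop items (an assumed shape for illustration, not the verbatim ceph-ansible task):

    - name: Generate device list when osd_auto_discovery is enabled
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when:
        - osd_auto_discovery | default(False) | bool  # false in this run, so every item is skipped
        - item.value.removable == '0'                 # would exclude removable media such as sr0
        - item.value.partitions | length == 0         # would exclude the partitioned root disk sda
        - item.value.holders | length == 0            # would exclude disks already claimed by the Ceph LVM volumes

With osd_auto_discovery left at false, the first condition fails for every entry, which is why even the empty loop0 through loop7 devices appear in the log only as skipped items.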
2025-06-11 14:58:34.787761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16', 'scsi-SQEMU_QEMU_HARDDISK_c820f619-9360-49d1-97de-f4f9700f6b29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d502667e--47a1--548a--a5f2--2993142d2957-osd--block--d502667e--47a1--548a--a5f2--2993142d2957'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTUHR3-cU3L-NQI4-ePP2-iL5O-Ympv-XUs7Dw', 'scsi-0QEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86', 'scsi-SQEMU_QEMU_HARDDISK_f26631de-4d53-47c9-822c-cbb2033e0b86'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--40a0a619--d38c--5879--89ae--a3eefd65fa41-osd--block--40a0a619--d38c--5879--89ae--a3eefd65fa41'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH3T3i-Dn3M-XNKe-Lyl1-pcgd-aURa-0aARjI', 'scsi-0QEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9', 'scsi-SQEMU_QEMU_HARDDISK_5fa61c96-5ca4-4fa7-9393-6e2780ce67d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d', 'dm-uuid-LVM-OZhBBziM30Sv33izNUJCCpS1ZmIlNIDNGZMmdZdnb82chb3ij6QUzfbwJKZdIPA4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b', 'scsi-SQEMU_QEMU_HARDDISK_e952eadf-b7fa-49e6-b121-e808f2d1456b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787912 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1', 'dm-uuid-LVM-kgQ11RSuUfOfaFhh0TgRjAWKWH7JHXuUvHaRAgQy1WMqaNvnF3uD6Jn1dgDVgtwG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-06-11-14-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787945 | orchestrator | skipping: [testbed-node-4] 2025-06-11 14:58:34.787956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.787984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16', 'scsi-SQEMU_QEMU_HARDDISK_947aec13-e8a1-49a8-a984-efdbf69cffa9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--af7ee71e--f6e2--506a--9b19--157b61fbf28d-osd--block--af7ee71e--f6e2--506a--9b19--157b61fbf28d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EZ46eo-ukF5-k0SP-GANR-L15Q-lcyW-RFGZXD', 'scsi-0QEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b', 'scsi-SQEMU_QEMU_HARDDISK_df292424-6e82-4e61-a52c-dd60099c8b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-11 14:58:34.788096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee9e3135--eac7--54c9--a7bd--c984355157b1-osd--block--ee9e3135--eac7--54c9--a7bd--c984355157b1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ELZhyh-Homk-4KJX-dJ89-JbC9-K2tK-3FJ5f5', 'scsi-0QEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961', 'scsi-SQEMU_QEMU_HARDDISK_75267c96-c7d6-45ef-a5a6-94b8e66fe961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.788108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86', 'scsi-SQEMU_QEMU_HARDDISK_0531c1ed-639b-4ab3-bbe7-14f10d387a86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.788125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-11-14-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-11 14:58:34.788142 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788153 | orchestrator |
2025-06-11 14:58:34.788164 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-11 14:58:34.788175 | orchestrator | Wednesday 11 June 2025 14:56:41 +0000 (0:00:00.677) 0:00:16.244 ********
2025-06-11 14:58:34.788186 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.788197 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.788208 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.788218 | orchestrator |
2025-06-11 14:58:34.788254 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-11 14:58:34.788270 | orchestrator | Wednesday 11 June 2025 14:56:42 +0000 (0:00:00.677) 0:00:16.922 ********
2025-06-11 14:58:34.788281 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.788292 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.788303 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.788313 | orchestrator |
2025-06-11 14:58:34.788324 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-11 14:58:34.788334 | orchestrator | Wednesday 11 June 2025 14:56:42 +0000 (0:00:00.452) 0:00:17.375 ********
2025-06-11 14:58:34.788345 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.788355 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.788366 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.788376 | orchestrator |
2025-06-11 14:58:34.788387 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-11 14:58:34.788397 | orchestrator | Wednesday 11 June 2025 14:56:43 +0000 (0:00:00.639) 0:00:18.014 ********
2025-06-11 14:58:34.788408 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.788419 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.788430 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788440 | orchestrator |
2025-06-11 14:58:34.788451 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-11 14:58:34.788461 | orchestrator | Wednesday 11 June 2025 14:56:43 +0000 (0:00:00.296) 0:00:18.311 ********
2025-06-11 14:58:34.788472 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.788483 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.788493 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788504 | orchestrator |
2025-06-11 14:58:34.788514 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-11 14:58:34.788525 | orchestrator | Wednesday 11 June 2025 14:56:44 +0000 (0:00:00.411) 0:00:18.722 ********
2025-06-11 14:58:34.788535 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.788546 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.788556 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788567 | orchestrator |
2025-06-11 14:58:34.788578 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-11 14:58:34.788588 | orchestrator | Wednesday 11 June 2025 14:56:44 +0000 (0:00:00.470) 0:00:19.192 ********
2025-06-11 14:58:34.788599 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-11 14:58:34.788610 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-11 14:58:34.788620 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-11 14:58:34.788631 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-11 14:58:34.788642 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-11 14:58:34.788652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-11 14:58:34.788662 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-11 14:58:34.788673 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-11 14:58:34.788690 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-11 14:58:34.788700 | orchestrator |
2025-06-11 14:58:34.788711 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-11 14:58:34.788721 | orchestrator | Wednesday 11 June 2025 14:56:45 +0000 (0:00:00.845) 0:00:20.038 ********
2025-06-11 14:58:34.788732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-11 14:58:34.788743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-11 14:58:34.788753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-11 14:58:34.788764 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.788774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-11 14:58:34.788785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-11 14:58:34.788795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-11 14:58:34.788806 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.788817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-11 14:58:34.788827 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-11 14:58:34.788837 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-11 14:58:34.788848 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788859 | orchestrator |
2025-06-11 14:58:34.788870 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-11 14:58:34.788880 | orchestrator | Wednesday 11 June 2025 14:56:45 +0000 (0:00:00.336) 0:00:20.375 ********
2025-06-11 14:58:34.788891 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 14:58:34.788902 | orchestrator |
2025-06-11 14:58:34.788913 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-11 14:58:34.788924 | orchestrator | Wednesday 11 June 2025 14:56:46 +0000 (0:00:00.656) 0:00:21.031 ********
2025-06-11 14:58:34.788935 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.788946 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.788957 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.788967 | orchestrator |
2025-06-11 14:58:34.788984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-11 14:58:34.788996 | orchestrator | Wednesday 11 June 2025 14:56:46 +0000 (0:00:00.291) 0:00:21.322 ********
2025-06-11 14:58:34.789006 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.789017 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.789027 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.789038 | orchestrator |
2025-06-11 14:58:34.789048 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-11 14:58:34.789059 | orchestrator | Wednesday 11 June 2025 14:56:47 +0000 (0:00:00.304) 0:00:21.627 ********
2025-06-11 14:58:34.789069 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.789080 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.789090 | orchestrator | skipping: [testbed-node-5]
2025-06-11 14:58:34.789101 | orchestrator |
2025-06-11 14:58:34.789111 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-11 14:58:34.789126 | orchestrator | Wednesday 11 June 2025 14:56:47 +0000 (0:00:00.303) 0:00:21.931 ********
2025-06-11 14:58:34.789142 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.789161 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.789185 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.789208 | orchestrator |
2025-06-11 14:58:34.789285 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-11 14:58:34.789307 | orchestrator | Wednesday 11 June 2025 14:56:48 +0000 (0:00:00.583) 0:00:22.514 ********
2025-06-11 14:58:34.789329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:58:34.789350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:58:34.789368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:58:34.789394 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.789405 | orchestrator |
2025-06-11 14:58:34.789416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-11 14:58:34.789426 | orchestrator | Wednesday 11 June 2025 14:56:48 +0000 (0:00:00.371) 0:00:22.886 ********
2025-06-11 14:58:34.789437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:58:34.789448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:58:34.789458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:58:34.789469 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.789479 | orchestrator |
2025-06-11 14:58:34.789489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-11 14:58:34.789500 | orchestrator | Wednesday 11 June 2025 14:56:48 +0000 (0:00:00.353) 0:00:23.240 ********
2025-06-11 14:58:34.789511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:58:34.789521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-11 14:58:34.789531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-11 14:58:34.789543 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.789559 | orchestrator |
2025-06-11 14:58:34.789578 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-11 14:58:34.789595 | orchestrator | Wednesday 11 June 2025 14:56:49 +0000 (0:00:00.356) 0:00:23.596 ********
2025-06-11 14:58:34.789620 | orchestrator | ok: [testbed-node-3]
2025-06-11 14:58:34.789645 | orchestrator | ok: [testbed-node-4]
2025-06-11 14:58:34.789663 | orchestrator | ok: [testbed-node-5]
2025-06-11 14:58:34.789683 | orchestrator |
2025-06-11 14:58:34.789700 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-11 14:58:34.789716 | orchestrator | Wednesday 11 June 2025 14:56:49 +0000 (0:00:00.343) 0:00:23.940 ********
2025-06-11 14:58:34.789726 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-11 14:58:34.789735 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-11 14:58:34.789745 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-11 14:58:34.789754 | orchestrator |
2025-06-11 14:58:34.789763 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-11 14:58:34.789773 | orchestrator | Wednesday 11 June 2025 14:56:50 +0000 (0:00:00.494) 0:00:24.434 ********
2025-06-11 14:58:34.789782 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:58:34.789791 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:58:34.789801 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:58:34.789810 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:58:34.789819 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-11 14:58:34.789829 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-11 14:58:34.789909 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-11 14:58:34.789921 | orchestrator |
2025-06-11 14:58:34.789931 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-11 14:58:34.789941 | orchestrator | Wednesday 11 June 2025 14:56:51 +0000 (0:00:00.945) 0:00:25.380 ********
2025-06-11 14:58:34.789951 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-11 14:58:34.789961 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-11 14:58:34.789970 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-11 14:58:34.789980 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-11 14:58:34.789990 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-11 14:58:34.790000 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-11 14:58:34.790061 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-11 14:58:34.790075 | orchestrator |
2025-06-11 14:58:34.790104 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-11 14:58:34.790120 | orchestrator | Wednesday 11 June 2025 14:56:52 +0000 (0:00:01.928) 0:00:27.308 ********
2025-06-11 14:58:34.790137 | orchestrator | skipping: [testbed-node-3]
2025-06-11 14:58:34.790153 | orchestrator | skipping: [testbed-node-4]
2025-06-11 14:58:34.790170 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-11 14:58:34.790186 | orchestrator |
2025-06-11 14:58:34.790196 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-11 14:58:34.790205 | orchestrator | Wednesday 11 June 2025 14:56:53 +0000 (0:00:00.385) 0:00:27.694 ********
2025-06-11 14:58:34.790244 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-11 14:58:34.790258 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-11 14:58:34.790269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-11 14:58:34.790278 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-11 14:58:34.790288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32,
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-11 14:58:34.790298 | orchestrator | 2025-06-11 14:58:34.790307 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-11 14:58:34.790317 | orchestrator | Wednesday 11 June 2025 14:57:38 +0000 (0:00:45.660) 0:01:13.355 ******** 2025-06-11 14:58:34.790326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790335 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790345 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790364 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790383 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-11 14:58:34.790392 | orchestrator | 2025-06-11 14:58:34.790401 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-11 14:58:34.790410 | orchestrator | Wednesday 11 June 2025 14:58:03 +0000 (0:00:24.792) 0:01:38.148 ******** 2025-06-11 14:58:34.790420 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790430 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790447 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790456 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790465 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790475 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790484 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-11 14:58:34.790493 | orchestrator | 2025-06-11 14:58:34.790503 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-11 14:58:34.790512 | orchestrator | Wednesday 11 June 2025 14:58:16 +0000 (0:00:13.172) 0:01:51.320 ******** 2025-06-11 14:58:34.790521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790531 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790540 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790549 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790559 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790568 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790594 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790603 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790623 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790632 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790641 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790650 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790665 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-11 14:58:34.790687 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-11 14:58:34.790704 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-11 14:58:34.790720 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-11 14:58:34.790736 | orchestrator | 2025-06-11 14:58:34.790752 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:58:34.790768 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-11 14:58:34.790787 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-11 14:58:34.790805 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-11 14:58:34.790822 | orchestrator | 2025-06-11 14:58:34.790832 | orchestrator | 2025-06-11 14:58:34.790842 | orchestrator | 2025-06-11 14:58:34.790851 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:58:34.790860 | orchestrator | Wednesday 11 June 2025 14:58:34 +0000 (0:00:17.527) 0:02:08.848 ******** 2025-06-11 14:58:34.790870 | orchestrator | =============================================================================== 2025-06-11 14:58:34.790879 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.66s 2025-06-11 14:58:34.790895 | orchestrator | generate keys ---------------------------------------------------------- 24.79s 2025-06-11 14:58:34.790905 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.53s 2025-06-11 14:58:34.790914 | orchestrator | get keys from monitors ------------------------------------------------- 13.17s 2025-06-11 14:58:34.790923 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.15s 2025-06-11 14:58:34.790933 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.93s 2025-06-11 14:58:34.790942 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s 2025-06-11 14:58:34.790951 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2025-06-11 14:58:34.790961 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-06-11 14:58:34.790970 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2025-06-11 14:58:34.790979 | orchestrator | 
ceph-facts : Check if podman binary is present -------------------------- 0.73s 2025-06-11 14:58:34.790988 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2025-06-11 14:58:34.790997 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.66s 2025-06-11 14:58:34.791007 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-06-11 14:58:34.791016 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-06-11 14:58:34.791025 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-06-11 14:58:34.791034 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.59s 2025-06-11 14:58:34.791044 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2025-06-11 14:58:34.791053 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2025-06-11 14:58:34.791062 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.54s 2025-06-11 14:58:34.791072 | orchestrator | 2025-06-11 14:58:34 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:34.791081 | orchestrator | 2025-06-11 14:58:34 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:37.828665 | orchestrator | 2025-06-11 14:58:37 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:37.829580 | orchestrator | 2025-06-11 14:58:37 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:37.831375 | orchestrator | 2025-06-11 14:58:37 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:37.831686 | orchestrator | 2025-06-11 14:58:37 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:40.872856 | orchestrator | 2025-06-11 14:58:40 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:40.874398 | orchestrator | 2025-06-11 14:58:40 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:40.876351 | orchestrator | 2025-06-11 14:58:40 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:40.876373 | orchestrator | 2025-06-11 14:58:40 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:43.913415 | orchestrator | 2025-06-11 14:58:43 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:43.913519 | orchestrator | 2025-06-11 14:58:43 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:43.916340 | orchestrator | 2025-06-11 14:58:43 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:43.916403 | orchestrator | 2025-06-11 14:58:43 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:46.961080 | orchestrator | 2025-06-11 14:58:46 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:46.962800 | orchestrator | 2025-06-11 14:58:46 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:46.964163 | orchestrator | 2025-06-11 14:58:46 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:46.964194 | orchestrator | 2025-06-11 14:58:46 | INFO  | Wait 1 second(s) until the next check 2025-06-11 
14:58:50.000658 | orchestrator | 2025-06-11 14:58:49 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:50.001711 | orchestrator | 2025-06-11 14:58:50 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:50.003775 | orchestrator | 2025-06-11 14:58:50 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:50.003824 | orchestrator | 2025-06-11 14:58:50 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:53.051586 | orchestrator | 2025-06-11 14:58:53 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:53.053236 | orchestrator | 2025-06-11 14:58:53 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:53.054968 | orchestrator | 2025-06-11 14:58:53 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:53.055204 | orchestrator | 2025-06-11 14:58:53 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:56.096213 | orchestrator | 2025-06-11 14:58:56 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:56.096931 | orchestrator | 2025-06-11 14:58:56 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:56.098213 | orchestrator | 2025-06-11 14:58:56 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:56.098282 | orchestrator | 2025-06-11 14:58:56 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:58:59.136675 | orchestrator | 2025-06-11 14:58:59 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:58:59.137573 | orchestrator | 2025-06-11 14:58:59 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:58:59.138377 | orchestrator | 2025-06-11 14:58:59 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:58:59.138407 | orchestrator | 2025-06-11 14:58:59 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:59:02.180263 | orchestrator | 2025-06-11 14:59:02 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:59:02.182986 | orchestrator | 2025-06-11 14:59:02 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state STARTED 2025-06-11 14:59:02.191172 | orchestrator | 2025-06-11 14:59:02 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:59:02.191302 | orchestrator | 2025-06-11 14:59:02 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:59:05.246697 | orchestrator | 2025-06-11 14:59:05 | INFO  | Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf is in state STARTED 2025-06-11 14:59:05.248437 | orchestrator | 2025-06-11 14:59:05 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:59:05.249384 | orchestrator | 2025-06-11 14:59:05 | INFO  | Task 53f0de81-1828-4e96-b09c-b7cc036b7317 is in state SUCCESS 2025-06-11 14:59:05.251679 | orchestrator | 2025-06-11 14:59:05 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:59:05.251877 | orchestrator | 2025-06-11 14:59:05 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:59:08.302671 | orchestrator | 2025-06-11 14:59:08 | INFO  | Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf is in state STARTED 2025-06-11 14:59:08.304323 | orchestrator | 2025-06-11 14:59:08 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:59:08.305482 | 
orchestrator | 2025-06-11 14:59:08 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:59:08.305658 | orchestrator | 2025-06-11 14:59:08 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:59:11.347157 | orchestrator | 2025-06-11 14:59:11 | INFO  | Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf is in state STARTED 2025-06-11 14:59:11.347317 | orchestrator | 2025-06-11 14:59:11 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state STARTED 2025-06-11 14:59:11.351108 | orchestrator | 2025-06-11 14:59:11 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED 2025-06-11 14:59:11.351163 | orchestrator | 2025-06-11 14:59:11 | INFO  | Wait 1 second(s) until the next check 2025-06-11 14:59:14.401327 | orchestrator | 2025-06-11 14:59:14 | INFO  | Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf is in state STARTED 2025-06-11 14:59:14.405493 | orchestrator | 2025-06-11 14:59:14.405914 | orchestrator | 2025-06-11 14:59:14.405939 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-11 14:59:14.405953 | orchestrator | 2025-06-11 14:59:14.405965 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-11 14:59:14.405977 | orchestrator | Wednesday 11 June 2025 14:58:38 +0000 (0:00:00.140) 0:00:00.140 ******** 2025-06-11 14:59:14.406010 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-11 14:59:14.406073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406085 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 14:59:14.406107 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-11 14:59:14.406129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-11 14:59:14.406140 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-11 14:59:14.406150 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-11 14:59:14.406161 | orchestrator | 2025-06-11 14:59:14.406172 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-11 14:59:14.406183 | orchestrator | Wednesday 11 June 2025 14:58:42 +0000 (0:00:04.190) 0:00:04.331 ******** 2025-06-11 14:59:14.406194 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-11 14:59:14.406206 | orchestrator | 2025-06-11 14:59:14.406296 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-11 14:59:14.406309 | orchestrator | Wednesday 11 June 2025 14:58:43 +0000 (0:00:00.875) 0:00:05.206 ******** 2025-06-11 14:59:14.406319 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-11 14:59:14.406330 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406341 | orchestrator | ok: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406352 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 14:59:14.406363 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406397 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-11 14:59:14.406408 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-11 14:59:14.406419 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-11 14:59:14.406430 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-11 14:59:14.406441 | orchestrator | 2025-06-11 14:59:14.406451 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-11 14:59:14.406462 | orchestrator | Wednesday 11 June 2025 14:58:56 +0000 (0:00:12.861) 0:00:18.067 ******** 2025-06-11 14:59:14.406497 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-11 14:59:14.406509 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406519 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406548 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 14:59:14.406561 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-11 14:59:14.406573 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-11 14:59:14.406585 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-11 14:59:14.406597 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-11 14:59:14.406609 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-11 14:59:14.406620 | orchestrator | 2025-06-11 14:59:14.406632 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:59:14.406644 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 14:59:14.406658 | orchestrator | 2025-06-11 14:59:14.406676 | orchestrator | 2025-06-11 14:59:14.406696 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:59:14.406716 | orchestrator | Wednesday 11 June 2025 14:59:02 +0000 (0:00:06.385) 0:00:24.453 ******** 2025-06-11 14:59:14.406756 | orchestrator | =============================================================================== 2025-06-11 14:59:14.406807 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.86s 2025-06-11 14:59:14.406825 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.39s 2025-06-11 14:59:14.406844 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.19s 2025-06-11 14:59:14.406863 | orchestrator | Create share directory -------------------------------------------------- 0.88s 2025-06-11 14:59:14.406883 | orchestrator | 2025-06-11 14:59:14.406901 | orchestrator | 2025-06-11 14:59:14.406920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 14:59:14.406933 | 
orchestrator | 2025-06-11 14:59:14.406985 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 14:59:14.406997 | orchestrator | Wednesday 11 June 2025 14:57:28 +0000 (0:00:00.257) 0:00:00.257 ******** 2025-06-11 14:59:14.407008 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.407019 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.407030 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.407047 | orchestrator | 2025-06-11 14:59:14.407066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 14:59:14.407083 | orchestrator | Wednesday 11 June 2025 14:57:28 +0000 (0:00:00.294) 0:00:00.551 ******** 2025-06-11 14:59:14.407102 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-11 14:59:14.407121 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-11 14:59:14.407138 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-11 14:59:14.407184 | orchestrator | 2025-06-11 14:59:14.407198 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-11 14:59:14.407254 | orchestrator | 2025-06-11 14:59:14.407266 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-11 14:59:14.407277 | orchestrator | Wednesday 11 June 2025 14:57:29 +0000 (0:00:00.398) 0:00:00.949 ******** 2025-06-11 14:59:14.407288 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:59:14.407299 | orchestrator | 2025-06-11 14:59:14.407310 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-11 14:59:14.407320 | orchestrator | Wednesday 11 June 2025 14:57:29 +0000 (0:00:00.494) 0:00:01.444 ******** 2025-06-11 14:59:14.407337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.407380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.407402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.407414 | orchestrator | 2025-06-11 14:59:14.407425 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-11 14:59:14.407441 | orchestrator | Wednesday 11 June 2025 14:57:30 +0000 (0:00:01.121) 0:00:02.565 ******** 2025-06-11 14:59:14.407453 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.407463 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.407474 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.407484 | orchestrator | 2025-06-11 14:59:14.407495 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-11 14:59:14.407506 | orchestrator | Wednesday 11 June 2025 14:57:31 +0000 (0:00:00.424) 0:00:02.989 ******** 2025-06-11 14:59:14.407516 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-11 14:59:14.407527 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-11 14:59:14.407544 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-11 14:59:14.407562 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-11 14:59:14.407573 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-11 14:59:14.407583 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-11 14:59:14.407594 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-11 14:59:14.407604 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-11 14:59:14.407615 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-11 14:59:14.407625 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-11 14:59:14.407636 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-11 14:59:14.407646 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'masakari', 'enabled': False})  2025-06-11 14:59:14.407657 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-11 14:59:14.407667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-11 14:59:14.407678 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-11 14:59:14.407689 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-11 14:59:14.407717 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-11 14:59:14.407728 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-11 14:59:14.407738 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-11 14:59:14.407749 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-11 14:59:14.407759 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-11 14:59:14.407770 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-11 14:59:14.407780 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-11 14:59:14.407791 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-11 14:59:14.407803 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-11 14:59:14.407814 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-11 14:59:14.407825 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-11 14:59:14.407836 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-11 14:59:14.407866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-11 14:59:14.407877 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-11 14:59:14.407888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-11 14:59:14.407899 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-11 14:59:14.407909 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-11 14:59:14.407926 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-11 14:59:14.407937 | orchestrator | 2025-06-11 14:59:14.407948 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-06-11 14:59:14.407963 | orchestrator | Wednesday 11 June 2025 14:57:31 +0000 (0:00:00.720) 0:00:03.710 ******** 2025-06-11 14:59:14.407974 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.407985 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.407995 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408006 | orchestrator | 2025-06-11 14:59:14.408017 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408027 | orchestrator | Wednesday 11 June 2025 14:57:32 +0000 (0:00:00.312) 0:00:04.023 ******** 2025-06-11 14:59:14.408038 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408049 | orchestrator | 2025-06-11 14:59:14.408060 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.408077 | orchestrator | Wednesday 11 June 2025 14:57:32 +0000 (0:00:00.115) 0:00:04.139 ******** 2025-06-11 14:59:14.408088 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408099 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.408109 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.408120 | orchestrator | 2025-06-11 14:59:14.408130 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.408141 | orchestrator | Wednesday 11 June 2025 14:57:32 +0000 (0:00:00.441) 0:00:04.580 ******** 2025-06-11 14:59:14.408152 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.408162 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.408173 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408183 | orchestrator | 2025-06-11 14:59:14.408194 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408205 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.316) 0:00:04.896 ******** 2025-06-11 14:59:14.408232 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408244 | orchestrator | 2025-06-11 14:59:14.408254 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.408265 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.134) 0:00:05.030 ******** 2025-06-11 14:59:14.408276 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408286 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.408297 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.408307 | orchestrator | 2025-06-11 14:59:14.408318 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.408328 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.274) 0:00:05.305 ******** 2025-06-11 14:59:14.408339 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.408350 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.408360 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408371 | orchestrator | 2025-06-11 14:59:14.408382 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408393 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.320) 0:00:05.626 ******** 2025-06-11 14:59:14.408403 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408414 | orchestrator | 2025-06-11 14:59:14.408425 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2025-06-11 14:59:14.408435 | orchestrator | Wednesday 11 June 2025 14:57:34 +0000 (0:00:00.327) 0:00:05.953 ******** 2025-06-11 14:59:14.408446 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408457 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.408467 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.408478 | orchestrator | 2025-06-11 14:59:14.408488 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.408499 | orchestrator | Wednesday 11 June 2025 14:57:34 +0000 (0:00:00.316) 0:00:06.269 ******** 2025-06-11 14:59:14.408517 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.408527 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.408538 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408548 | orchestrator | 2025-06-11 14:59:14.408559 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408570 | orchestrator | Wednesday 11 June 2025 14:57:34 +0000 (0:00:00.322) 0:00:06.592 ******** 2025-06-11 14:59:14.408580 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408591 | orchestrator | 2025-06-11 14:59:14.408601 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.408612 | orchestrator | Wednesday 11 June 2025 14:57:34 +0000 (0:00:00.117) 0:00:06.710 ******** 2025-06-11 14:59:14.408622 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408633 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.408643 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.408653 | orchestrator | 2025-06-11 14:59:14.408664 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.408675 | orchestrator | Wednesday 11 June 2025 14:57:35 +0000 (0:00:00.264) 0:00:06.974 ******** 2025-06-11 14:59:14.408701 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.408713 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.408724 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408735 | orchestrator | 2025-06-11 14:59:14.408746 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408756 | orchestrator | Wednesday 11 June 2025 14:57:35 +0000 (0:00:00.512) 0:00:07.487 ******** 2025-06-11 14:59:14.408767 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408777 | orchestrator | 2025-06-11 14:59:14.408788 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.408799 | orchestrator | Wednesday 11 June 2025 14:57:35 +0000 (0:00:00.129) 0:00:07.616 ******** 2025-06-11 14:59:14.408809 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408820 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.408830 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.408841 | orchestrator | 2025-06-11 14:59:14.408852 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.408862 | orchestrator | Wednesday 11 June 2025 14:57:36 +0000 (0:00:00.308) 0:00:07.925 ******** 2025-06-11 14:59:14.408873 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.408884 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.408894 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.408904 | 
orchestrator | 2025-06-11 14:59:14.408915 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.408926 | orchestrator | Wednesday 11 June 2025 14:57:36 +0000 (0:00:00.299) 0:00:08.225 ******** 2025-06-11 14:59:14.408936 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408947 | orchestrator | 2025-06-11 14:59:14.408963 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.408973 | orchestrator | Wednesday 11 June 2025 14:57:36 +0000 (0:00:00.138) 0:00:08.363 ******** 2025-06-11 14:59:14.408984 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.408995 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.409005 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.409016 | orchestrator | 2025-06-11 14:59:14.409026 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.409037 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:00.593) 0:00:08.956 ******** 2025-06-11 14:59:14.409048 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.409058 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.409069 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.409080 | orchestrator | 2025-06-11 14:59:14.409097 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.409109 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:00.341) 0:00:09.297 ******** 2025-06-11 14:59:14.409126 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409137 | orchestrator | 2025-06-11 14:59:14.409147 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.409158 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:00.137) 0:00:09.435 ******** 2025-06-11 14:59:14.409169 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409179 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.409191 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.409277 | orchestrator | 2025-06-11 14:59:14.409304 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.409316 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:00.314) 0:00:09.750 ******** 2025-06-11 14:59:14.409327 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.409337 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.409348 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.409359 | orchestrator | 2025-06-11 14:59:14.409369 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.409380 | orchestrator | Wednesday 11 June 2025 14:57:38 +0000 (0:00:00.390) 0:00:10.140 ******** 2025-06-11 14:59:14.409391 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409402 | orchestrator | 2025-06-11 14:59:14.409447 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.409458 | orchestrator | Wednesday 11 June 2025 14:57:38 +0000 (0:00:00.136) 0:00:10.276 ******** 2025-06-11 14:59:14.409469 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409480 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.409491 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.409501 | orchestrator | 
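
The trio of tasks repeating above and below ("Update policy file name", "Check if policies shall be overwritten", "Update custom policy file name") is one pass of /ansible/roles/horizon/tasks/policy_item.yml per dashboard-relevant service. The include loop recorded earlier in this play shows disabled services (cloudkitty, heat, ironic, masakari, mistral, tacker, trove, watcher) being skipped and the file being included once each for ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, and octavia. A minimal sketch of that dispatch, assuming a loop shaped like the (item={'name': ..., 'enabled': ...}) pairs in the log; the variable names are illustrative, not taken from the role:

    # Sketch only: one {name, enabled} pair per service, mirroring the loop
    # items in the log. Disabled services never reach policy_item.yml because
    # the conditional is evaluated per item.
    - name: Include per-service policy tasks
      ansible.builtin.include_tasks: policy_item.yml
      loop:
        - { name: "cloudkitty", enabled: "{{ enable_cloudkitty | bool }}" }
        - { name: "ceilometer", enabled: "{{ enable_ceilometer | bool }}" }
        # ... cinder, designate, glance, keystone, magnum, manila, neutron,
        # nova, octavia
      when: item.enabled | bool

Note that the enabled flags in the logged items appear both as booleans (True/False) and as strings ('yes'/'no'), which is why a cast such as | bool is needed before the conditional can be trusted.
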
2025-06-11 14:59:14.409512 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.409523 | orchestrator | Wednesday 11 June 2025 14:57:39 +0000 (0:00:00.525) 0:00:10.801 ******** 2025-06-11 14:59:14.409534 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.409544 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.409555 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.409566 | orchestrator | 2025-06-11 14:59:14.409578 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.409596 | orchestrator | Wednesday 11 June 2025 14:57:39 +0000 (0:00:00.417) 0:00:11.219 ******** 2025-06-11 14:59:14.409615 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409632 | orchestrator | 2025-06-11 14:59:14.409648 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.409663 | orchestrator | Wednesday 11 June 2025 14:57:39 +0000 (0:00:00.152) 0:00:11.371 ******** 2025-06-11 14:59:14.409679 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409693 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.409706 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.409720 | orchestrator | 2025-06-11 14:59:14.409737 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-11 14:59:14.409753 | orchestrator | Wednesday 11 June 2025 14:57:39 +0000 (0:00:00.304) 0:00:11.676 ******** 2025-06-11 14:59:14.409770 | orchestrator | ok: [testbed-node-0] 2025-06-11 14:59:14.409787 | orchestrator | ok: [testbed-node-1] 2025-06-11 14:59:14.409802 | orchestrator | ok: [testbed-node-2] 2025-06-11 14:59:14.409818 | orchestrator | 2025-06-11 14:59:14.409831 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-11 14:59:14.409841 | orchestrator | Wednesday 11 June 2025 14:57:40 +0000 (0:00:00.484) 0:00:12.161 ******** 2025-06-11 14:59:14.409850 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409860 | orchestrator | 2025-06-11 14:59:14.409869 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-11 14:59:14.409879 | orchestrator | Wednesday 11 June 2025 14:57:40 +0000 (0:00:00.116) 0:00:12.277 ******** 2025-06-11 14:59:14.409888 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.409897 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.409907 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.409926 | orchestrator | 2025-06-11 14:59:14.409935 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-11 14:59:14.409945 | orchestrator | Wednesday 11 June 2025 14:57:40 +0000 (0:00:00.313) 0:00:12.590 ******** 2025-06-11 14:59:14.409954 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:59:14.409964 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:59:14.409973 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:59:14.409982 | orchestrator | 2025-06-11 14:59:14.409992 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-11 14:59:14.410055 | orchestrator | Wednesday 11 June 2025 14:57:42 +0000 (0:00:01.480) 0:00:14.071 ******** 2025-06-11 14:59:14.410065 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-11 
14:59:14.410075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-11 14:59:14.410085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-11 14:59:14.410094 | orchestrator | 2025-06-11 14:59:14.410104 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-11 14:59:14.410113 | orchestrator | Wednesday 11 June 2025 14:57:44 +0000 (0:00:02.226) 0:00:16.298 ******** 2025-06-11 14:59:14.410135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-11 14:59:14.410146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-11 14:59:14.410155 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-11 14:59:14.410165 | orchestrator | 2025-06-11 14:59:14.410175 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-11 14:59:14.410184 | orchestrator | Wednesday 11 June 2025 14:57:46 +0000 (0:00:02.282) 0:00:18.581 ******** 2025-06-11 14:59:14.410204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-11 14:59:14.410232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-11 14:59:14.410242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-11 14:59:14.410251 | orchestrator | 2025-06-11 14:59:14.410261 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-11 14:59:14.410270 | orchestrator | Wednesday 11 June 2025 14:57:48 +0000 (0:00:01.538) 0:00:20.119 ******** 2025-06-11 14:59:14.410280 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.410289 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.410298 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.410308 | orchestrator | 2025-06-11 14:59:14.410317 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-11 14:59:14.410327 | orchestrator | Wednesday 11 June 2025 14:57:48 +0000 (0:00:00.317) 0:00:20.437 ******** 2025-06-11 14:59:14.410336 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.410345 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.410355 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.410364 | orchestrator | 2025-06-11 14:59:14.410374 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-11 14:59:14.410383 | orchestrator | Wednesday 11 June 2025 14:57:48 +0000 (0:00:00.263) 0:00:20.701 ******** 2025-06-11 14:59:14.410392 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:59:14.410402 | orchestrator | 2025-06-11 14:59:14.410411 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-11 14:59:14.410421 | orchestrator | Wednesday 11 June 2025 14:57:49 +0000 (0:00:00.741) 0:00:21.442 ******** 2025-06-11 14:59:14.410433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.410466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-11 14:59:14.410489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-11 14:59:14.410500 | orchestrator |
2025-06-11 14:59:14.410510 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-06-11 14:59:14.410520 | orchestrator | Wednesday 11 June 2025 14:57:51 +0000 (0:00:01.520) 0:00:22.963 ********
2025-06-11 14:59:14.410538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-11 14:59:14.410552 | orchestrator | 2025-06-11 14:59:14 | INFO  | Task a4548ea9-a6ed-41ca-9361-f49158ee7bdc is in state SUCCESS
2025-06-11 14:59:14.410570 | orchestrator | skipping: [testbed-node-0]
2025-06-11 14:59:14.410591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:59:14.410603 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.410619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:59:14.410645 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.410661 | orchestrator | 2025-06-11 14:59:14.410672 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-11 14:59:14.410681 | orchestrator | Wednesday 11 June 2025 14:57:51 +0000 (0:00:00.615) 0:00:23.578 ******** 2025-06-11 14:59:14.410707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:59:14.410718 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.410729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:59:14.410745 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.410768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-11 14:59:14.410786 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.410795 | orchestrator | 2025-06-11 14:59:14.410805 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-11 14:59:14.410814 | orchestrator | Wednesday 11 June 2025 14:57:52 +0000 (0:00:01.055) 0:00:24.634 ******** 2025-06-11 14:59:14.410825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.410849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.410871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-11 14:59:14.410882 | orchestrator | 2025-06-11 14:59:14.410892 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-11 14:59:14.410901 | orchestrator | Wednesday 11 June 2025 14:57:54 +0000 (0:00:01.420) 0:00:26.054 ******** 2025-06-11 14:59:14.410915 | orchestrator | skipping: [testbed-node-0] 2025-06-11 14:59:14.410925 | orchestrator | skipping: [testbed-node-1] 2025-06-11 14:59:14.410934 | orchestrator | skipping: [testbed-node-2] 2025-06-11 14:59:14.410943 | orchestrator | 2025-06-11 14:59:14.410968 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-11 14:59:14.410978 | orchestrator | Wednesday 11 June 2025 14:57:54 +0000 (0:00:00.321) 0:00:26.376 ******** 2025-06-11 14:59:14.410988 | orchestrator | included: 
/ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 14:59:14.410997 | orchestrator | 2025-06-11 14:59:14.411007 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-11 14:59:14.411023 | orchestrator | Wednesday 11 June 2025 14:57:55 +0000 (0:00:00.701) 0:00:27.077 ******** 2025-06-11 14:59:14.411033 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:59:14.411042 | orchestrator | 2025-06-11 14:59:14.411052 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-11 14:59:14.411071 | orchestrator | Wednesday 11 June 2025 14:57:57 +0000 (0:00:02.218) 0:00:29.296 ******** 2025-06-11 14:59:14.411081 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:59:14.411090 | orchestrator | 2025-06-11 14:59:14.411099 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-11 14:59:14.411109 | orchestrator | Wednesday 11 June 2025 14:57:59 +0000 (0:00:02.041) 0:00:31.337 ******** 2025-06-11 14:59:14.411118 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:59:14.411127 | orchestrator | 2025-06-11 14:59:14.411137 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-11 14:59:14.411146 | orchestrator | Wednesday 11 June 2025 14:58:15 +0000 (0:00:16.171) 0:00:47.509 ******** 2025-06-11 14:59:14.411156 | orchestrator | 2025-06-11 14:59:14.411165 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-11 14:59:14.411174 | orchestrator | Wednesday 11 June 2025 14:58:15 +0000 (0:00:00.065) 0:00:47.574 ******** 2025-06-11 14:59:14.411184 | orchestrator | 2025-06-11 14:59:14.411193 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-11 14:59:14.411203 | orchestrator | Wednesday 11 June 2025 14:58:15 +0000 (0:00:00.063) 0:00:47.637 ******** 2025-06-11 14:59:14.411228 | orchestrator | 2025-06-11 14:59:14.411239 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-11 14:59:14.411248 | orchestrator | Wednesday 11 June 2025 14:58:15 +0000 (0:00:00.081) 0:00:47.719 ******** 2025-06-11 14:59:14.411257 | orchestrator | changed: [testbed-node-0] 2025-06-11 14:59:14.411267 | orchestrator | changed: [testbed-node-1] 2025-06-11 14:59:14.411276 | orchestrator | changed: [testbed-node-2] 2025-06-11 14:59:14.411285 | orchestrator | 2025-06-11 14:59:14.411295 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 14:59:14.411305 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-11 14:59:14.411315 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-11 14:59:14.411325 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-11 14:59:14.411334 | orchestrator | 2025-06-11 14:59:14.411343 | orchestrator | 2025-06-11 14:59:14.411353 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 14:59:14.411362 | orchestrator | Wednesday 11 June 2025 14:59:13 +0000 (0:00:57.862) 0:01:45.581 ******** 2025-06-11 14:59:14.411372 | orchestrator | =============================================================================== 2025-06-11 
14:59:14.411381 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.86s
2025-06-11 14:59:14.411390 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.17s
2025-06-11 14:59:14.411400 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.28s
2025-06-11 14:59:14.411409 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.23s
2025-06-11 14:59:14.411418 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.22s
2025-06-11 14:59:14.411428 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.04s
2025-06-11 14:59:14.411437 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s
2025-06-11 14:59:14.411447 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s
2025-06-11 14:59:14.411456 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.48s
2025-06-11 14:59:14.411465 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.42s
2025-06-11 14:59:14.411475 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.12s
2025-06-11 14:59:14.411484 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.06s
2025-06-11 14:59:14.411504 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2025-06-11 14:59:14.411514 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2025-06-11 14:59:14.411523 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s
2025-06-11 14:59:14.411533 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.62s
2025-06-11 14:59:14.411542 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s
2025-06-11 14:59:14.411551 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s
2025-06-11 14:59:14.411561 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-06-11 14:59:14.411575 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s
2025-06-11 14:59:14.411585 | orchestrator | 2025-06-11 14:59:14 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 14:59:14.411594 | orchestrator | 2025-06-11 14:59:14 | INFO  | Wait 1 second(s) until the next check
[... 15 repetitive poll cycles (2025-06-11 14:59:17 through 15:00:00, one roughly every 3 seconds) trimmed: each reports Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf and Task 078aaf64-cacb-4643-a15e-106825369e3a in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2025-06-11 15:00:03.105234 | orchestrator | 2025-06-11 15:00:03 | INFO  | Task b1a18542-ca63-4e70-91a3-2f1cbd0b43bf is in state SUCCESS
2025-06-11 15:00:03.105356 | orchestrator | 2025-06-11 15:00:03 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED
2025-06-11 15:00:03.105516 | orchestrator | 2025-06-11 15:00:03 | INFO  | Task 1ef893b4-6f3a-49a4-b4a7-11c3952d45d8 is in state STARTED
2025-06-11 15:00:03.107263 | orchestrator | 2025-06-11 15:00:03 | INFO  | Task 15d91a6a-cbe9-4105-9760-bae761b224e7 is in state STARTED
2025-06-11 15:00:03.108579 | orchestrator | 2025-06-11 15:00:03 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 15:00:03.108660 | orchestrator | 2025-06-11 15:00:03 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:00:06.168387 | orchestrator | 2025-06-11 15:00:06 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED
2025-06-11 15:00:06.169487 | orchestrator | 2025-06-11 15:00:06 | INFO  | Task 1ef893b4-6f3a-49a4-b4a7-11c3952d45d8 is in state STARTED
2025-06-11 15:00:06.170593 | orchestrator | 2025-06-11 15:00:06 | INFO  | Task 15d91a6a-cbe9-4105-9760-bae761b224e7 is in state STARTED
2025-06-11 15:00:06.172099 | orchestrator | 2025-06-11 15:00:06 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state STARTED
2025-06-11 15:00:06.172133 | orchestrator | 2025-06-11 15:00:06 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:00:09.197480 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED
2025-06-11 15:00:09.198003 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED
2025-06-11 15:00:09.199097 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 3f37f89d-8bcf-4982-8c83-38dde8643459 is in state STARTED
2025-06-11 15:00:09.200045 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 2c825209-335c-445b-a2ff-3bf08d69b935 is in state STARTED
2025-06-11 15:00:09.200172 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 1ef893b4-6f3a-49a4-b4a7-11c3952d45d8 is in state STARTED
2025-06-11 15:00:09.201067 | orchestrator |
2025-06-11 15:00:09 | INFO  | Task 15d91a6a-cbe9-4105-9760-bae761b224e7 is in state SUCCESS 2025-06-11 15:00:09.202683 | orchestrator | 2025-06-11 15:00:09 | INFO  | Task 078aaf64-cacb-4643-a15e-106825369e3a is in state SUCCESS 2025-06-11 15:00:09.203076 | orchestrator | 2025-06-11 15:00:09.203104 | orchestrator | 2025-06-11 15:00:09.203125 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-11 15:00:09.203145 | orchestrator | 2025-06-11 15:00:09.203157 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-11 15:00:09.203168 | orchestrator | Wednesday 11 June 2025 14:59:06 +0000 (0:00:00.232) 0:00:00.232 ******** 2025-06-11 15:00:09.203180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-11 15:00:09.203191 | orchestrator | 2025-06-11 15:00:09.203279 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-11 15:00:09.203291 | orchestrator | Wednesday 11 June 2025 14:59:07 +0000 (0:00:00.221) 0:00:00.454 ******** 2025-06-11 15:00:09.203303 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-11 15:00:09.203313 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-11 15:00:09.203325 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-11 15:00:09.203337 | orchestrator | 2025-06-11 15:00:09.203360 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-11 15:00:09.203372 | orchestrator | Wednesday 11 June 2025 14:59:08 +0000 (0:00:01.166) 0:00:01.620 ******** 2025-06-11 15:00:09.203382 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-11 15:00:09.203393 | orchestrator | 2025-06-11 15:00:09.203404 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-11 15:00:09.203415 | orchestrator | Wednesday 11 June 2025 14:59:09 +0000 (0:00:01.166) 0:00:02.787 ******** 2025-06-11 15:00:09.203687 | orchestrator | changed: [testbed-manager] 2025-06-11 15:00:09.203710 | orchestrator | 2025-06-11 15:00:09.203732 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-11 15:00:09.203747 | orchestrator | Wednesday 11 June 2025 14:59:10 +0000 (0:00:00.996) 0:00:03.783 ******** 2025-06-11 15:00:09.203759 | orchestrator | changed: [testbed-manager] 2025-06-11 15:00:09.203793 | orchestrator | 2025-06-11 15:00:09.203806 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-11 15:00:09.204862 | orchestrator | Wednesday 11 June 2025 14:59:11 +0000 (0:00:00.864) 0:00:04.647 ******** 2025-06-11 15:00:09.204880 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
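
The "FAILED - RETRYING: ... (10 retries left)" line above is the countdown Ansible prints for a task that carries an until/retries/delay loop: the first status check ran before the freshly created cephclient container was reachable, so the task was re-run until the condition held, which is also why "Manage cephclient service" tops the TASKS RECAP below at 40.86s. A minimal sketch of how such a task is commonly written for a docker-compose-managed service follows; the module choice, path and delay are illustrative assumptions, not the actual osism.services.cephclient source:

  - name: Manage cephclient service
    community.docker.docker_compose_v2:  # assumed module for a compose-based service
      project_src: /opt/cephclient       # directory holding the docker-compose.yml copied above
      state: present
    register: result
    until: result is success             # re-evaluate the condition after every attempt
    retries: 10                          # the first failure prints "(10 retries left)"
    delay: 5                             # assumed pause between attempts, in seconds
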
2025-06-11 15:00:09.204890 | orchestrator | ok: [testbed-manager]
2025-06-11 15:00:09.204902 | orchestrator |
2025-06-11 15:00:09.204913 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-11 15:00:09.204924 | orchestrator | Wednesday 11 June 2025 14:59:52 +0000 (0:00:40.860) 0:00:45.508 ********
2025-06-11 15:00:09.204934 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-11 15:00:09.204946 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-11 15:00:09.204957 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-11 15:00:09.204968 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-11 15:00:09.204978 | orchestrator | changed: [testbed-manager] => (item=rbd)
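
The wrapper scripts installed here (ceph, ceph-authtool, rados, radosgw-admin, rbd) make the CLI tools shipped inside the cephclient container callable as ordinary commands on testbed-manager, presumably by executing them inside the running container. A sketch of how such a copy loop is typically written follows; the template name, destination and mode are assumptions rather than the role's actual source:

  - name: Copy wrapper scripts
    ansible.builtin.template:
      src: wrapper.sh.j2                  # assumed template that runs "{{ item }}" inside the cephclient container
      dest: "/usr/local/bin/{{ item }}"   # assumed install location on the manager
      mode: "0755"
    loop:
      - ceph
      - ceph-authtool
      - rados
      - radosgw-admin
      - rbd

The "Remove old wrapper scripts" task directly below then deletes shims the role no longer ships, here only crushtool.
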
2025-06-11 15:00:09.204989 | orchestrator |
2025-06-11 15:00:09.204999 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-11 15:00:09.205010 | orchestrator | Wednesday 11 June 2025 14:59:56 +0000 (0:00:03.966) 0:00:49.475 ********
2025-06-11 15:00:09.205021 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-11 15:00:09.205032 | orchestrator |
2025-06-11 15:00:09.205042 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-11 15:00:09.205053 | orchestrator | Wednesday 11 June 2025 14:59:56 +0000 (0:00:00.430) 0:00:49.905 ********
2025-06-11 15:00:09.205064 | orchestrator | skipping: [testbed-manager]
2025-06-11 15:00:09.205074 | orchestrator |
2025-06-11 15:00:09.205085 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-11 15:00:09.205096 | orchestrator | Wednesday 11 June 2025 14:59:56 +0000 (0:00:00.134) 0:00:50.040 ********
2025-06-11 15:00:09.205106 | orchestrator | skipping: [testbed-manager]
2025-06-11 15:00:09.205117 | orchestrator |
2025-06-11 15:00:09.205127 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-11 15:00:09.205138 | orchestrator | Wednesday 11 June 2025 14:59:56 +0000 (0:00:00.290) 0:00:50.331 ********
2025-06-11 15:00:09.205149 | orchestrator | changed: [testbed-manager]
2025-06-11 15:00:09.205159 | orchestrator |
2025-06-11 15:00:09.205170 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-11 15:00:09.205181 | orchestrator | Wednesday 11 June 2025 14:59:58 +0000 (0:00:01.608) 0:00:51.939 ********
2025-06-11 15:00:09.205191 | orchestrator | changed: [testbed-manager]
2025-06-11 15:00:09.205225 | orchestrator |
2025-06-11 15:00:09.205236 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-11 15:00:09.205247 | orchestrator | Wednesday 11 June 2025 14:59:59 +0000 (0:00:00.682) 0:00:52.621 ********
2025-06-11 15:00:09.205258 | orchestrator | changed: [testbed-manager]
2025-06-11 15:00:09.205269 | orchestrator |
2025-06-11 15:00:09.205279 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-11 15:00:09.205290 | orchestrator | Wednesday 11 June 2025 14:59:59 +0000 (0:00:00.575) 0:00:53.196 ********
2025-06-11 15:00:09.205301 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-11 15:00:09.205311 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-11 15:00:09.205322 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-11 15:00:09.205333 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-11 15:00:09.205344 | orchestrator |
2025-06-11 15:00:09.205354 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:00:09.205365 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-11 15:00:09.205377 | orchestrator |
2025-06-11 15:00:09.205388 | orchestrator |
2025-06-11 15:00:09.205441 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:00:09.205469 | orchestrator | Wednesday 11 June 2025 15:00:01 +0000 (0:00:01.488) 0:00:54.685 ********
2025-06-11 15:00:09.205482 | orchestrator | ===============================================================================
2025-06-11 15:00:09.205495 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.86s
2025-06-11 15:00:09.205508 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.97s
2025-06-11 15:00:09.205520 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.61s
2025-06-11 15:00:09.205532 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s
2025-06-11 15:00:09.205545 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s
2025-06-11 15:00:09.205557 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s
2025-06-11 15:00:09.205570 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s
2025-06-11 15:00:09.205588 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.86s
2025-06-11 15:00:09.205601 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.68s
2025-06-11 15:00:09.205613 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s
2025-06-11 15:00:09.205625 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2025-06-11 15:00:09.205638 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-06-11 15:00:09.205650 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-06-11 15:00:09.205663 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-06-11 15:00:09.205675 | orchestrator |
2025-06-11 15:00:09.205687 | orchestrator |
2025-06-11 15:00:09.205700 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:00:09.205713 | orchestrator |
2025-06-11 15:00:09.205725 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:00:09.205738 | orchestrator | Wednesday 11 June 2025 15:00:05 +0000 (0:00:00.157) 0:00:00.157 ********
2025-06-11 15:00:09.205751 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:00:09.205764 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:00:09.205776 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:00:09.205789 | orchestrator |
2025-06-11 15:00:09.205800 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:00:09.205810 | orchestrator | Wednesday 11 June 2025 15:00:05 +0000 (0:00:00.289) 0:00:00.446 ********
2025-06-11 15:00:09.205821 | orchestrator | ok:
[testbed-node-0] => (item=enable_keystone_True) 2025-06-11 15:00:09.205832 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-11 15:00:09.205843 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-11 15:00:09.205853 | orchestrator | 2025-06-11 15:00:09.205864 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-11 15:00:09.205875 | orchestrator | 2025-06-11 15:00:09.205885 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-11 15:00:09.205896 | orchestrator | Wednesday 11 June 2025 15:00:06 +0000 (0:00:00.686) 0:00:01.133 ******** 2025-06-11 15:00:09.205907 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:00:09.205917 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:00:09.205928 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.205938 | orchestrator | 2025-06-11 15:00:09.205949 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:00:09.205960 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:00:09.205971 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:00:09.205982 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:00:09.205999 | orchestrator | 2025-06-11 15:00:09.206009 | orchestrator | 2025-06-11 15:00:09.206074 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:00:09.206085 | orchestrator | Wednesday 11 June 2025 15:00:07 +0000 (0:00:00.715) 0:00:01.849 ******** 2025-06-11 15:00:09.206096 | orchestrator | =============================================================================== 2025-06-11 15:00:09.206107 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-06-11 15:00:09.206117 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2025-06-11 15:00:09.206128 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-11 15:00:09.206139 | orchestrator | 2025-06-11 15:00:09.206150 | orchestrator | 2025-06-11 15:00:09.206161 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:00:09.206171 | orchestrator | 2025-06-11 15:00:09.206182 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:00:09.206193 | orchestrator | Wednesday 11 June 2025 14:57:28 +0000 (0:00:00.257) 0:00:00.257 ******** 2025-06-11 15:00:09.206225 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.206236 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:00:09.206246 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:00:09.206257 | orchestrator | 2025-06-11 15:00:09.206268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:00:09.206278 | orchestrator | Wednesday 11 June 2025 14:57:28 +0000 (0:00:00.277) 0:00:00.535 ******** 2025-06-11 15:00:09.206289 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-11 15:00:09.206300 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-11 15:00:09.206311 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-11 
15:00:09.206321 | orchestrator |
2025-06-11 15:00:09.206332 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-11 15:00:09.206343 | orchestrator |
2025-06-11 15:00:09.206391 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-11 15:00:09.206404 | orchestrator | Wednesday 11 June 2025 14:57:29 +0000 (0:00:00.403) 0:00:00.939 ********
2025-06-11 15:00:09.206415 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:00:09.206426 | orchestrator |
2025-06-11 15:00:09.206437 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-11 15:00:09.206448 | orchestrator | Wednesday 11 June 2025 14:57:29 +0000 (0:00:00.512) 0:00:01.452 ********
2025-06-11 15:00:09.206468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.206616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.206627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.206638 | orchestrator |
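The (item=...) dumps above are the entries of the keystone_services dict that every task in this play iterates over. Reconstructed as YAML from the log output (a sketch assembled here for readability, not copied from the role's defaults; the healthcheck address is templated per host), the keystone entry looks roughly like:

    keystone_services:
      keystone:
        container_name: keystone
        group: keystone
        enabled: true
        image: registry.osism.tech/kolla/keystone:2024.2
        volumes:
          - "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
          - "keystone_fernet_tokens:/etc/keystone/fernet-keys"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://{{ api_interface_address }}:5000"]  # per-host IP, e.g. 192.168.16.10
          timeout: "30"
        haproxy:
          keystone_internal:
            enabled: true
            mode: http
            external: false
            port: "5000"
            listen_port: "5000"
            backend_http_extra: ["balance roundrobin"]
          keystone_external:
            enabled: true
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: "5000"
            listen_port: "5000"
            backend_http_extra: ["balance roundrobin"]

keystone-ssh and keystone-fernet follow the same shape with their own images and healthchecks ('healthcheck_listen sshd 8023' and '/usr/bin/fernet-healthcheck.sh').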
2025-06-11 15:00:09.206649 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-06-11 15:00:09.206661 | orchestrator | Wednesday 11 June 2025 14:57:31 +0000 (0:00:01.778) 0:00:03.231 ********
2025-06-11 15:00:09.206672 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-06-11 15:00:09.206682 | orchestrator |
2025-06-11 15:00:09.206693 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-06-11 15:00:09.206704 | orchestrator | Wednesday 11 June 2025 14:57:32 +0000 (0:00:00.847) 0:00:04.078 ********
2025-06-11 15:00:09.206714 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:00:09.206725 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:00:09.206736 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:00:09.206746 | orchestrator |
2025-06-11 15:00:09.206757 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-06-11 15:00:09.206768 | orchestrator | Wednesday 11 June 2025 14:57:32 +0000 (0:00:00.468) 0:00:04.547 ********
2025-06-11 15:00:09.206778 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 15:00:09.206789 | orchestrator |
2025-06-11 15:00:09.206799 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-11 15:00:09.206810 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.671) 0:00:05.219 ********
2025-06-11 15:00:09.206821 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:00:09.206832 | orchestrator |
2025-06-11 15:00:09.206848 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-06-11 15:00:09.206859 | orchestrator | Wednesday 11 June 2025 14:57:33 +0000 (0:00:00.504) 0:00:05.723 ********
2025-06-11 15:00:09.206880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.206925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.206980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.206992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207015 | orchestrator |
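Each task in the role follows the same pattern: loop over keystone_services with with_dict and act once per service container. A minimal sketch of the CA-certificate copy seen above, assuming the filter and variable names used by recent kolla-ansible releases (the real task lives in the shared service-cert-copy role):

    - name: "{{ project_name }} | Copying over extra CA certificates"
      become: true
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/ca/"
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      with_dict: "{{ keystone_services | select_services_enabled_and_mapped_to_host }}"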
2025-06-11 15:00:09.207026 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-06-11 15:00:09.207036 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:03.393) 0:00:09.116 ********
2025-06-11 15:00:09.207048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207100 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.207112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207147 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.207165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207252 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.207272 | orchestrator |
2025-06-11 15:00:09.207291 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-06-11 15:00:09.207303 | orchestrator | Wednesday 11 June 2025 14:57:37 +0000 (0:00:00.546) 0:00:09.663 ********
2025-06-11 15:00:09.207314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207349 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.207373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207416 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.207427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207477 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.207488 | orchestrator |
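Every item in the two backend-TLS tasks is skipped because this deployment terminates TLS at haproxy; the service definitions above carry 'tls_backend': 'no'. The guard has roughly the following shape (a sketch; kolla_enable_tls_backend is the usual switch, the exact source path and condition may differ per release):

    - name: "{{ project_name }} | Copying over backend internal TLS certificate"
      become: true
      ansible.builtin.copy:
        src: "{{ kolla_tls_backend_cert }}"
        dest: "/etc/kolla/{{ item.key }}/{{ project_name }}-cert.pem"
        mode: "0600"
      with_dict: "{{ keystone_services | select_services_enabled_and_mapped_to_host }}"
      when: kolla_enable_tls_backend | bool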
2025-06-11 15:00:09.207498 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-06-11 15:00:09.207509 | orchestrator | Wednesday 11 June 2025 14:57:38 +0000 (0:00:00.795) 0:00:10.458 ********
2025-06-11 15:00:09.207525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207647 | orchestrator |
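config.json is the contract with kolla_start inside each container: at container start it copies everything from /var/lib/kolla/config_files/ (the first volume in every definition above) to its final location and fixes ownership and permissions. A sketch of the distribution task, one rendered template per service:

    - name: Copying over config.json files for services
      become: true
      ansible.builtin.template:
        src: "{{ item.key }}.json.j2"
        dest: "/etc/kolla/{{ item.key }}/config.json"
        mode: "0660"
      with_dict: "{{ keystone_services | select_services_enabled_and_mapped_to_host }}"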
2025-06-11 15:00:09.207658 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-06-11 15:00:09.207669 | orchestrator | Wednesday 11 June 2025 14:57:42 +0000 (0:00:03.398) 0:00:13.857 ********
2025-06-11 15:00:09.207687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.207767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.207786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.207825 | orchestrator |
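keystone.conf is not rendered from a single template: kolla-ansible layers the role default with operator overrides through its merge_configs action plugin, which is why only keystone and keystone-fernet change here while keystone-ssh is skipped (it does not consume keystone.conf). A sketch with assumed override paths:

    - name: Copying over keystone.conf
      become: true
      merge_configs:
        sources:
          - "{{ role_path }}/templates/keystone.conf.j2"
          - "{{ node_custom_config }}/global.conf"
          - "{{ node_custom_config }}/keystone.conf"
          - "{{ node_custom_config }}/keystone/{{ inventory_hostname }}/keystone.conf"
        dest: "/etc/kolla/{{ item.key }}/keystone.conf"
        mode: "0660"
      with_dict: "{{ keystone_services | select_services_enabled_and_mapped_to_host }}"
      when: item.key in ["keystone", "keystone-fernet"]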
2025-06-11 15:00:09.207836 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-06-11 15:00:09.207847 | orchestrator | Wednesday 11 June 2025 14:57:47 +0000 (0:00:05.463) 0:00:19.320 ********
2025-06-11 15:00:09.207858 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:00:09.207869 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:00:09.207880 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:00:09.207890 | orchestrator |
2025-06-11 15:00:09.207901 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-06-11 15:00:09.207912 | orchestrator | Wednesday 11 June 2025 14:57:48 +0000 (0:00:01.394) 0:00:20.715 ********
2025-06-11 15:00:09.207922 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.207933 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.207943 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.207954 | orchestrator |
2025-06-11 15:00:09.207964 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-06-11 15:00:09.207975 | orchestrator | Wednesday 11 June 2025 14:57:49 +0000 (0:00:00.525) 0:00:21.240 ********
2025-06-11 15:00:09.207986 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.207996 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.208007 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.208017 | orchestrator |
2025-06-11 15:00:09.208028 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-06-11 15:00:09.208045 | orchestrator | Wednesday 11 June 2025 14:57:49 +0000 (0:00:00.458) 0:00:21.699 ********
2025-06-11 15:00:09.208056 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.208066 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.208077 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.208088 | orchestrator |
2025-06-11 15:00:09.208099 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-06-11 15:00:09.208109 | orchestrator | Wednesday 11 June 2025 14:57:50 +0000 (0:00:00.336) 0:00:22.035 ********
2025-06-11 15:00:09.208121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.208139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.208151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.208163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.208238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-11 15:00:09.208262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-11 15:00:09.208282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.208299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.208310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-11 15:00:09.208321 | orchestrator |
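The policy.yaml located at the start of the play under /opt/configuration/environments/kolla/files/overlays/keystone/ is now pushed into the config directory of every service that evaluates policy, which again leaves keystone-ssh skipped. A sketch of the copy, with assumed variable names:

    - name: Copying over existing policy file
      become: true
      ansible.builtin.template:
        src: "{{ keystone_policy_file_path }}"
        dest: "/etc/kolla/{{ item.key }}/{{ keystone_policy_file }}"
        mode: "0660"
      with_dict: "{{ keystone_services | select_services_enabled_and_mapped_to_host }}"
      when: keystone_policy_file is defined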
2025-06-11 15:00:09.208332 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-11 15:00:09.208343 | orchestrator | Wednesday 11 June 2025 14:57:52 +0000 (0:00:02.398) 0:00:24.433 ********
2025-06-11 15:00:09.208354 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.208364 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.208375 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.208385 | orchestrator |
2025-06-11 15:00:09.208404 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-06-11 15:00:09.208422 | orchestrator | Wednesday 11 June 2025 14:57:52 +0000 (0:00:00.298) 0:00:24.732 ********
2025-06-11 15:00:09.208449 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-11 15:00:09.208468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-11 15:00:09.208488 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-11 15:00:09.208506 | orchestrator |
2025-06-11 15:00:09.208524 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-11 15:00:09.208536 | orchestrator | Wednesday 11 June 2025 14:57:55 +0000 (0:00:02.076) 0:00:26.809 ********
2025-06-11 15:00:09.208547 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 15:00:09.208558 | orchestrator |
2025-06-11 15:00:09.208568 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-11 15:00:09.208579 | orchestrator | Wednesday 11 June 2025 14:57:55 +0000 (0:00:00.885) 0:00:27.695 ********
2025-06-11 15:00:09.208589 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:00:09.208600 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:00:09.208610 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:00:09.208621 | orchestrator |
2025-06-11 15:00:09.208631 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-11 15:00:09.208642 | orchestrator | Wednesday 11 June 2025 14:57:56 +0000 (0:00:00.541) 0:00:28.237 ********
2025-06-11 15:00:09.208652 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 15:00:09.208663 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-11 15:00:09.208673 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-11 15:00:09.208684 | orchestrator |
2025-06-11 15:00:09.208695 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-11 15:00:09.208705 | orchestrator | Wednesday 11 June 2025 14:57:57 +0000 (0:00:00.963) 0:00:29.200 ********
2025-06-11 15:00:09.208716 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:00:09.208727 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:00:09.208737 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:00:09.208748 | orchestrator |
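The cron jobs generated here drive fernet key rotation: they are computed on the deploy host, staggered by host index so each of the three nodes rotates at a different offset, and stored as a fact for the crontab template rendered in the next task ("Copying files for keystone-fernet", below). A sketch of the mechanism only; the generator script name, its flags, and the fact names are assumptions:

    - name: Generate the required cron jobs for the node
      delegate_to: localhost
      ansible.builtin.command: >
        python3 {{ role_path }}/files/fernet_rotate_cron_generator.py
        -t {{ fernet_token_expiry }}
        -i {{ groups['keystone'].index(inventory_hostname) }}
        -n {{ groups['keystone'] | length }}
      register: keystone_cron_jobs_json
      changed_when: false

    - name: Set fact with the generated cron jobs for building the crontab later
      ansible.builtin.set_fact:
        keystone_cron_jobs: "{{ (keystone_cron_jobs_json.stdout | from_json).cron_jobs }}"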
2025-06-11 15:00:09.208758 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-11 15:00:09.208769 | orchestrator | Wednesday 11 June 2025 14:57:57 +0000 (0:00:00.278) 0:00:29.479 ********
2025-06-11 15:00:09.208779 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-11 15:00:09.208790 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-11 15:00:09.208800 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-11 15:00:09.208811 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-11 15:00:09.208822 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-11 15:00:09.208839 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-11 15:00:09.208851 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-11 15:00:09.208862 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-11 15:00:09.208872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-11 15:00:09.208883 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-11 15:00:09.208893 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-11 15:00:09.208903 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-11 15:00:09.208919 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-11 15:00:09.208936 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-11 15:00:09.208947 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-11 15:00:09.208958 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-11 15:00:09.208968 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-11 15:00:09.208979 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-11 15:00:09.208989 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-11 15:00:09.208999 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-11 15:00:09.209010 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-11 15:00:09.209020 | orchestrator |
2025-06-11 15:00:09.209031 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-11 15:00:09.209041 | orchestrator | Wednesday 11 June 2025 14:58:06 +0000 (0:00:08.541) 0:00:38.020 ********
2025-06-11 15:00:09.209052 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 15:00:09.209062 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 15:00:09.209073 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-11 15:00:09.209083 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 15:00:09.209093 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 15:00:09.209104 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-11 15:00:09.209114 | orchestrator |
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-11 15:00:09.209252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-11 15:00:09.209263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-11 15:00:09.209282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-11 15:00:09.209304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-11 15:00:09.209315 | orchestrator | 2025-06-11 15:00:09.209327 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-11 15:00:09.209337 | orchestrator | Wednesday 11 June 2025 14:58:11 +0000 (0:00:02.331) 0:00:43.007 ******** 2025-06-11 15:00:09.209348 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.209359 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:00:09.209369 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:00:09.209380 | orchestrator | 2025-06-11 15:00:09.209390 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-11 15:00:09.209401 | 
orchestrator | Wednesday 11 June 2025 14:58:11 +0000 (0:00:00.307) 0:00:43.315 ******** 2025-06-11 15:00:09.209412 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209422 | orchestrator | 2025-06-11 15:00:09.209433 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-11 15:00:09.209444 | orchestrator | Wednesday 11 June 2025 14:58:13 +0000 (0:00:02.282) 0:00:45.598 ******** 2025-06-11 15:00:09.209454 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209465 | orchestrator | 2025-06-11 15:00:09.209475 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-11 15:00:09.209486 | orchestrator | Wednesday 11 June 2025 14:58:16 +0000 (0:00:02.803) 0:00:48.401 ******** 2025-06-11 15:00:09.209496 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.209507 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:00:09.209518 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:00:09.209528 | orchestrator | 2025-06-11 15:00:09.209539 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-11 15:00:09.209550 | orchestrator | Wednesday 11 June 2025 14:58:17 +0000 (0:00:00.948) 0:00:49.349 ******** 2025-06-11 15:00:09.209560 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.209571 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:00:09.209582 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:00:09.209592 | orchestrator | 2025-06-11 15:00:09.209603 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-11 15:00:09.209613 | orchestrator | Wednesday 11 June 2025 14:58:17 +0000 (0:00:00.353) 0:00:49.703 ******** 2025-06-11 15:00:09.209624 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.209635 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:00:09.209645 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:00:09.209656 | orchestrator | 2025-06-11 15:00:09.209666 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-11 15:00:09.209677 | orchestrator | Wednesday 11 June 2025 14:58:18 +0000 (0:00:00.403) 0:00:50.107 ******** 2025-06-11 15:00:09.209687 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209698 | orchestrator | 2025-06-11 15:00:09.209709 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-11 15:00:09.209719 | orchestrator | Wednesday 11 June 2025 14:58:32 +0000 (0:00:14.228) 0:01:04.336 ******** 2025-06-11 15:00:09.209730 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209741 | orchestrator | 2025-06-11 15:00:09.209757 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-11 15:00:09.209768 | orchestrator | Wednesday 11 June 2025 14:58:42 +0000 (0:00:10.305) 0:01:14.641 ******** 2025-06-11 15:00:09.209778 | orchestrator | 2025-06-11 15:00:09.209789 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-11 15:00:09.209800 | orchestrator | Wednesday 11 June 2025 14:58:43 +0000 (0:00:00.194) 0:01:14.836 ******** 2025-06-11 15:00:09.209810 | orchestrator | 2025-06-11 15:00:09.209821 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-11 15:00:09.209832 | orchestrator | Wednesday 11 June 2025 14:58:43 +0000 (0:00:00.056) 0:01:14.892 ******** 
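[Editor's note: The two bootstrap steps above are one-shot containers: the first essentially runs keystone-manage bootstrap to create the initial admin identity, the second seeds the fernet key repository that the keystone_fernet_tokens volume exposes at /etc/keystone/fernet-keys in the container definitions earlier. In that repository, key "0" is the staged key and the highest-numbered key is the primary one used to sign new tokens. A minimal Python sketch of that rotation scheme, assuming a plain directory and the cryptography package — not the actual keystone-manage code:]

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

def rotate_fernet_keys(repo: str) -> None:
    """Keystone-style rotation: "0" is the staged key, highest index is primary."""
    os.makedirs(repo, exist_ok=True)
    indexes = [int(n) for n in os.listdir(repo) if n.isdigit()]
    if 0 in indexes:
        # Promote the staged key to become the new primary key.
        os.rename(os.path.join(repo, "0"),
                  os.path.join(repo, str(max(indexes) + 1)))
    # Stage a fresh key as "0"; it only becomes primary on a later rotation,
    # so all nodes can learn it before any tokens are signed with it.
    with open(os.path.join(repo, "0"), "wb") as fh:
        fh.write(Fernet.generate_key())
```

[A real rotation additionally prunes the oldest keys; this staged/primary scheme is also why the fernet-push.sh and fernet-node-sync.sh scripts copied earlier matter — every node must hold the same repository for tokens to validate cluster-wide.]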
2025-06-11 15:00:09.209842 | orchestrator | 2025-06-11 15:00:09.209853 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-11 15:00:09.209863 | orchestrator | Wednesday 11 June 2025 14:58:43 +0000 (0:00:00.061) 0:01:14.954 ******** 2025-06-11 15:00:09.209874 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209885 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:00:09.209895 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:00:09.209906 | orchestrator | 2025-06-11 15:00:09.209916 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-11 15:00:09.209927 | orchestrator | Wednesday 11 June 2025 14:59:01 +0000 (0:00:17.853) 0:01:32.807 ******** 2025-06-11 15:00:09.209938 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:00:09.209948 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.209959 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:00:09.209969 | orchestrator | 2025-06-11 15:00:09.209980 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-11 15:00:09.209990 | orchestrator | Wednesday 11 June 2025 14:59:10 +0000 (0:00:09.900) 0:01:42.707 ******** 2025-06-11 15:00:09.210001 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.210012 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:00:09.210070 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:00:09.210082 | orchestrator | 2025-06-11 15:00:09.210092 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-11 15:00:09.210103 | orchestrator | Wednesday 11 June 2025 14:59:18 +0000 (0:00:07.222) 0:01:49.930 ******** 2025-06-11 15:00:09.210114 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:00:09.210124 | orchestrator | 2025-06-11 15:00:09.210135 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-11 15:00:09.210146 | orchestrator | Wednesday 11 June 2025 14:59:18 +0000 (0:00:00.722) 0:01:50.652 ******** 2025-06-11 15:00:09.210156 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.210167 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:00:09.210178 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:00:09.210189 | orchestrator | 2025-06-11 15:00:09.210220 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-11 15:00:09.210231 | orchestrator | Wednesday 11 June 2025 14:59:19 +0000 (0:00:00.743) 0:01:51.395 ******** 2025-06-11 15:00:09.210242 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:00:09.210252 | orchestrator | 2025-06-11 15:00:09.210271 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-11 15:00:09.210282 | orchestrator | Wednesday 11 June 2025 14:59:21 +0000 (0:00:01.754) 0:01:53.150 ******** 2025-06-11 15:00:09.210293 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-11 15:00:09.210304 | orchestrator | 2025-06-11 15:00:09.210314 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-11 15:00:09.210325 | orchestrator | Wednesday 11 June 2025 14:59:32 +0000 (0:00:11.029) 0:02:04.179 ******** 2025-06-11 15:00:09.210335 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-11 
15:00:09.210346 | orchestrator | 2025-06-11 15:00:09.210357 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-11 15:00:09.210367 | orchestrator | Wednesday 11 June 2025 14:59:54 +0000 (0:00:22.161) 0:02:26.341 ******** 2025-06-11 15:00:09.210385 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-11 15:00:09.210396 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-11 15:00:09.210407 | orchestrator | 2025-06-11 15:00:09.210417 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-11 15:00:09.210428 | orchestrator | Wednesday 11 June 2025 15:00:01 +0000 (0:00:06.619) 0:02:32.960 ******** 2025-06-11 15:00:09.210439 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.210449 | orchestrator | 2025-06-11 15:00:09.210460 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-11 15:00:09.210470 | orchestrator | Wednesday 11 June 2025 15:00:01 +0000 (0:00:00.297) 0:02:33.258 ******** 2025-06-11 15:00:09.210481 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.210491 | orchestrator | 2025-06-11 15:00:09.210502 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-11 15:00:09.210513 | orchestrator | Wednesday 11 June 2025 15:00:01 +0000 (0:00:00.119) 0:02:33.377 ******** 2025-06-11 15:00:09.210523 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.210534 | orchestrator | 2025-06-11 15:00:09.210545 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-11 15:00:09.210555 | orchestrator | Wednesday 11 June 2025 15:00:01 +0000 (0:00:00.125) 0:02:33.502 ******** 2025-06-11 15:00:09.210566 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.210576 | orchestrator | 2025-06-11 15:00:09.210587 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-11 15:00:09.210598 | orchestrator | Wednesday 11 June 2025 15:00:02 +0000 (0:00:00.307) 0:02:33.810 ******** 2025-06-11 15:00:09.210608 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:00:09.210619 | orchestrator | 2025-06-11 15:00:09.210630 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-11 15:00:09.210640 | orchestrator | Wednesday 11 June 2025 15:00:05 +0000 (0:00:03.750) 0:02:37.561 ******** 2025-06-11 15:00:09.210651 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:00:09.210662 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:00:09.210672 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:00:09.210683 | orchestrator | 2025-06-11 15:00:09.210693 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:00:09.210704 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-11 15:00:09.210716 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-11 15:00:09.210727 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-11 15:00:09.210737 | orchestrator | 2025-06-11 15:00:09.210748 | orchestrator | 2025-06-11 15:00:09.210759 | orchestrator | TASKS RECAP 
********************************************************************
2025-06-11 15:00:09.210769 | orchestrator | Wednesday 11 June 2025 15:00:06 +0000 (0:00:00.588) 0:02:38.149 ********
2025-06-11 15:00:09.210780 | orchestrator | ===============================================================================
2025-06-11 15:00:09.210791 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.16s
2025-06-11 15:00:09.210801 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.85s
2025-06-11 15:00:09.210812 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.23s
2025-06-11 15:00:09.210823 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.03s
2025-06-11 15:00:09.210833 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.31s
2025-06-11 15:00:09.210849 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.90s
2025-06-11 15:00:09.210867 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.54s
2025-06-11 15:00:09.210878 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.22s
2025-06-11 15:00:09.210888 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.62s
2025-06-11 15:00:09.210899 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.46s
2025-06-11 15:00:09.210909 | orchestrator | keystone : Creating default user role ----------------------------------- 3.75s
2025-06-11 15:00:09.210920 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.40s
2025-06-11 15:00:09.210930 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.39s
2025-06-11 15:00:09.210941 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.80s
2025-06-11 15:00:09.210955 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.66s
2025-06-11 15:00:09.210966 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.40s
2025-06-11 15:00:09.210977 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s
2025-06-11 15:00:09.210987 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s
2025-06-11 15:00:09.210998 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.08s
2025-06-11 15:00:09.211009 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.78s
2025-06-11 15:00:09.211019 | orchestrator | 2025-06-11 15:00:09 | INFO  | Wait 1 second(s) until the next check
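[Editor's note: At this point the identity play has finished (PLAY RECAP above) and the public endpoint https://api.testbed.osism.xyz:5000 from the log is registered. A quick smoke test is to request a token from it; a stdlib-only Python sketch with placeholder credentials:]

```python
import json
import urllib.request

# Keystone v3 password authentication; a successful request answers with
# HTTP 201 and an X-Subject-Token response header.
payload = {"auth": {"identity": {"methods": ["password"], "password": {
    "user": {"name": "admin", "domain": {"name": "Default"},
             "password": "CHANGE_ME"}}}}}  # placeholder credentials
req = urllib.request.Request(
    "https://api.testbed.osism.xyz:5000/v3/auth/tokens",  # endpoint from the log
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, "X-Subject-Token" in resp.headers)
```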
[Polling loop condensed: from 15:00:12 to 15:02:07 the orchestrator re-checked roughly every three seconds, logging "Task <id> is in state STARTED" for tasks 80a10342-51e7-4eeb-a64c-22e98b230789, 6e74df46-eb60-463f-82e8-805372389f40, 3f37f89d-8bcf-4982-8c83-38dde8643459, 2c825209-335c-445b-a2ff-3bf08d69b935 and 1ef893b4-6f3a-49a4-b4a7-11c3952d45d8, each round ending with "Wait 1 second(s) until the next check". Task 1ef893b4-6f3a-49a4-b4a7-11c3952d45d8 reached state SUCCESS at 15:01:31; the remaining four tasks stayed in state STARTED, and a new task 7160791d-dec5-4449-8d60-3ea44758dc51 first appeared in state STARTED at 15:02:07.]
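[Editor's note: The shape of that wait loop is simple; a Python sketch, with get_task_state as a hypothetical stand-in for whatever task API the OSISM wrapper actually queries — the log only shows the resulting STARTED/SUCCESS states and the one-second wait message:]

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task has finished (sketch only)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical accessor
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

[The rounds in this log land about three seconds apart despite the one-second sleep, since each round also pays for the state queries themselves.]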
[The 15:02:07 round completed and the checks continued every ~3 seconds until 15:02:16, when task 2c825209-335c-445b-a2ff-3bf08d69b935 was reported in state SUCCESS and the buffered output of the next play flushed:]
2025-06-11 15:02:16.823198 | orchestrator |
2025-06-11 15:02:16.823235 | orchestrator |
2025-06-11 15:02:16.823247 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-06-11 15:02:16.823259 | orchestrator |
2025-06-11 15:02:16.823270 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-06-11 15:02:16.823282 | orchestrator | Wednesday 11 June 2025 15:00:05 +0000 (0:00:00.243) 0:00:00.243 ********
2025-06-11 15:02:16.823318 | orchestrator | changed: [testbed-manager]
2025-06-11 15:02:16.823330 | orchestrator |
2025-06-11 15:02:16.823355 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-06-11 15:02:16.823366 | orchestrator | Wednesday 11 June 2025 15:00:07 +0000 (0:00:02.074) 0:00:02.318 ********
2025-06-11 15:02:16.823377 | orchestrator | changed: [testbed-manager]
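[Editor's note: The play continues below with the remaining mgr/dashboard options, then re-enables the module and creates an admin user. Each task maps naturally onto a ceph CLI call; a Python-driven sketch of roughly equivalent commands — an illustration assuming a working ceph CLI on the manager, not the playbook's actual implementation:]

```python
import subprocess

# Roughly what the dashboard bootstrap does, expressed as ceph CLI calls.
commands = [
    "ceph mgr module disable dashboard",
    "ceph config set mgr mgr/dashboard/ssl false",
    "ceph config set mgr mgr/dashboard/server_port 7000",
    "ceph config set mgr mgr/dashboard/server_addr 0.0.0.0",
    "ceph config set mgr mgr/dashboard/standby_behaviour error",
    "ceph config set mgr mgr/dashboard/standby_error_status_code 404",
    "ceph mgr module enable dashboard",
    # The password is read from a file so it never hits the process list;
    # the path here is hypothetical.
    "ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator",
]
for cmd in commands:
    subprocess.run(cmd.split(), check=True)
```

[Disabling and re-enabling the module forces the mgr to pick up the new settings, and the standby options make non-active managers answer 404 instead of redirecting, which suits a load-balancer health check.]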
2025-06-11 15:02:16.823388 | orchestrator | 2025-06-11 15:02:16.823398 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-11 15:02:16.823409 | orchestrator | Wednesday 11 June 2025 15:00:08 +0000 (0:00:00.957) 0:00:03.275 ******** 2025-06-11 15:02:16.823420 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823430 | orchestrator | 2025-06-11 15:02:16.823462 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-11 15:02:16.823474 | orchestrator | Wednesday 11 June 2025 15:00:09 +0000 (0:00:00.908) 0:00:04.184 ******** 2025-06-11 15:02:16.823486 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823496 | orchestrator | 2025-06-11 15:02:16.823508 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-11 15:02:16.823519 | orchestrator | Wednesday 11 June 2025 15:00:10 +0000 (0:00:01.107) 0:00:05.291 ******** 2025-06-11 15:02:16.823530 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823541 | orchestrator | 2025-06-11 15:02:16.823552 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-11 15:02:16.823562 | orchestrator | Wednesday 11 June 2025 15:00:11 +0000 (0:00:00.941) 0:00:06.232 ******** 2025-06-11 15:02:16.823573 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823584 | orchestrator | 2025-06-11 15:02:16.823595 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-11 15:02:16.823606 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.861) 0:00:07.094 ******** 2025-06-11 15:02:16.823616 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823627 | orchestrator | 2025-06-11 15:02:16.823638 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-11 15:02:16.823649 | orchestrator | Wednesday 11 June 2025 15:00:13 +0000 (0:00:01.230) 0:00:08.324 ******** 2025-06-11 15:02:16.823660 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823759 | orchestrator | 2025-06-11 15:02:16.823772 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-11 15:02:16.823786 | orchestrator | Wednesday 11 June 2025 15:00:14 +0000 (0:00:01.162) 0:00:09.487 ******** 2025-06-11 15:02:16.823798 | orchestrator | changed: [testbed-manager] 2025-06-11 15:02:16.823809 | orchestrator | 2025-06-11 15:02:16.823821 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-11 15:02:16.823834 | orchestrator | Wednesday 11 June 2025 15:01:05 +0000 (0:00:50.795) 0:01:00.283 ******** 2025-06-11 15:02:16.823846 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:02:16.823859 | orchestrator | 2025-06-11 15:02:16.823871 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-11 15:02:16.823883 | orchestrator | 2025-06-11 15:02:16.823894 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-11 15:02:16.823904 | orchestrator | Wednesday 11 June 2025 15:01:05 +0000 (0:00:00.144) 0:01:00.427 ******** 2025-06-11 15:02:16.823915 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.823926 | orchestrator | 2025-06-11 15:02:16.823937 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 
2025-06-11 15:02:16.823947 | orchestrator | 2025-06-11 15:02:16.823958 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-11 15:02:16.823969 | orchestrator | Wednesday 11 June 2025 15:01:07 +0000 (0:00:01.552) 0:01:01.980 ******** 2025-06-11 15:02:16.823979 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:02:16.823990 | orchestrator | 2025-06-11 15:02:16.824001 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-11 15:02:16.824012 | orchestrator | 2025-06-11 15:02:16.824023 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-11 15:02:16.824041 | orchestrator | Wednesday 11 June 2025 15:01:18 +0000 (0:00:11.232) 0:01:13.212 ******** 2025-06-11 15:02:16.824052 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:02:16.824063 | orchestrator | 2025-06-11 15:02:16.824074 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:02:16.824085 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-11 15:02:16.824097 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:02:16.824108 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:02:16.824119 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:02:16.824130 | orchestrator | 2025-06-11 15:02:16.824141 | orchestrator | 2025-06-11 15:02:16.824169 | orchestrator | 2025-06-11 15:02:16.824181 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:02:16.824192 | orchestrator | Wednesday 11 June 2025 15:01:29 +0000 (0:00:11.341) 0:01:24.554 ******** 2025-06-11 15:02:16.824203 | orchestrator | =============================================================================== 2025-06-11 15:02:16.824213 | orchestrator | Create admin user ------------------------------------------------------ 50.80s 2025-06-11 15:02:16.824224 | orchestrator | Restart ceph manager service ------------------------------------------- 24.13s 2025-06-11 15:02:16.824249 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.07s 2025-06-11 15:02:16.824260 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.23s 2025-06-11 15:02:16.824271 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.16s 2025-06-11 15:02:16.824282 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.11s 2025-06-11 15:02:16.824292 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.96s 2025-06-11 15:02:16.824309 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.94s 2025-06-11 15:02:16.824320 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2025-06-11 15:02:16.824331 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.86s 2025-06-11 15:02:16.824342 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-06-11 15:02:16.824353 | orchestrator | 2025-06-11 15:02:16.824363 | orchestrator | 2025-06-11 15:02:16.824374 | 
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:02:16.824385 | orchestrator | 2025-06-11 15:02:16.824395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:02:16.824406 | orchestrator | Wednesday 11 June 2025 15:00:11 +0000 (0:00:00.265) 0:00:00.265 ******** 2025-06-11 15:02:16.824417 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:02:16.824428 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:02:16.824438 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:02:16.824449 | orchestrator | 2025-06-11 15:02:16.824460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:02:16.824470 | orchestrator | Wednesday 11 June 2025 15:00:11 +0000 (0:00:00.265) 0:00:00.531 ******** 2025-06-11 15:02:16.824481 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-11 15:02:16.824493 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-11 15:02:16.824504 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-11 15:02:16.824514 | orchestrator | 2025-06-11 15:02:16.824525 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-11 15:02:16.824536 | orchestrator | 2025-06-11 15:02:16.824546 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-11 15:02:16.824564 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.502) 0:00:01.034 ******** 2025-06-11 15:02:16.824575 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:02:16.824587 | orchestrator | 2025-06-11 15:02:16.824597 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-11 15:02:16.824608 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.756) 0:00:01.791 ******** 2025-06-11 15:02:16.824619 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-11 15:02:16.824630 | orchestrator | 2025-06-11 15:02:16.824640 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-11 15:02:16.824651 | orchestrator | Wednesday 11 June 2025 15:00:16 +0000 (0:00:03.566) 0:00:05.358 ******** 2025-06-11 15:02:16.824661 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-11 15:02:16.824672 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-11 15:02:16.824683 | orchestrator | 2025-06-11 15:02:16.824694 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-11 15:02:16.824705 | orchestrator | Wednesday 11 June 2025 15:00:23 +0000 (0:00:06.843) 0:00:12.201 ******** 2025-06-11 15:02:16.824715 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-11 15:02:16.824726 | orchestrator | 2025-06-11 15:02:16.824737 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-11 15:02:16.824747 | orchestrator | Wednesday 11 June 2025 15:00:26 +0000 (0:00:03.462) 0:00:15.664 ******** 2025-06-11 15:02:16.824758 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:02:16.824769 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service) 2025-06-11 15:02:16.824779 | orchestrator | 2025-06-11 15:02:16.824790 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-11 15:02:16.824801 | orchestrator | Wednesday 11 June 2025 15:00:30 +0000 (0:00:03.976) 0:00:19.641 ******** 2025-06-11 15:02:16.824812 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:02:16.824822 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-11 15:02:16.824833 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-11 15:02:16.824844 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-11 15:02:16.824855 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-11 15:02:16.824866 | orchestrator | 2025-06-11 15:02:16.824877 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-11 15:02:16.824887 | orchestrator | Wednesday 11 June 2025 15:00:46 +0000 (0:00:15.463) 0:00:35.104 ******** 2025-06-11 15:02:16.824898 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-11 15:02:16.824909 | orchestrator | 2025-06-11 15:02:16.824919 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-11 15:02:16.824930 | orchestrator | Wednesday 11 June 2025 15:00:50 +0000 (0:00:03.968) 0:00:39.073 ******** 2025-06-11 15:02:16.824957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.824981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.824993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.825005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825094 | orchestrator | 2025-06-11 15:02:16.825105 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-11 15:02:16.825116 | orchestrator | Wednesday 11 June 2025 15:00:51 +0000 (0:00:01.791) 0:00:40.865 ******** 2025-06-11 15:02:16.825127 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-11 15:02:16.825138 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-11 15:02:16.825148 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-11 15:02:16.825189 | orchestrator | 2025-06-11 15:02:16.825200 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-11 15:02:16.825219 | orchestrator | Wednesday 11 June 2025 15:00:53 +0000 (0:00:01.249) 0:00:42.115 ******** 2025-06-11 15:02:16.825238 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.825258 | orchestrator | 2025-06-11 15:02:16.825279 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-11 15:02:16.825298 | orchestrator | Wednesday 11 June 2025 15:00:53 +0000 (0:00:00.428) 0:00:42.544 ******** 2025-06-11 15:02:16.825309 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.825320 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.825331 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.825341 | orchestrator | 2025-06-11 15:02:16.825352 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-11 15:02:16.825363 | orchestrator | Wednesday 11 June 2025 15:00:54 +0000 (0:00:01.181) 0:00:43.726 ******** 2025-06-11 15:02:16.825373 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:02:16.825384 | orchestrator | 2025-06-11 15:02:16.825395 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-11 15:02:16.825405 | orchestrator | Wednesday 11 June 2025 15:00:56 +0000 (0:00:01.478) 0:00:45.204 ******** 2025-06-11 15:02:16.825425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.825451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.825463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.825475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.825571 | orchestrator | 2025-06-11 15:02:16.825589 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-11 15:02:16.825608 | orchestrator | Wednesday 11 June 2025 15:01:00 +0000 (0:00:03.915) 0:00:49.119 ******** 2025-06-11 15:02:16.825628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.825640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825671 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.825701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.825714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825736 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.825748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.825759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825794 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.825805 | orchestrator | 2025-06-11 15:02:16.825816 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-11 15:02:16.825826 | orchestrator | Wednesday 11 June 2025 15:01:01 +0000 (0:00:01.627) 0:00:50.747 ******** 2025-06-11 15:02:16.825842 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.825854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.825876 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.825888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.826252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826294 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.826305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.826316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826347 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.826358 | orchestrator | 2025-06-11 15:02:16.826369 | orchestrator | TASK [barbican : Copying over config.json files for services] 
****************** 2025-06-11 15:02:16.826380 | orchestrator | Wednesday 11 June 2025 15:01:03 +0000 (0:00:02.020) 0:00:52.767 ******** 2025-06-11 15:02:16.826392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826527 | orchestrator | 2025-06-11 15:02:16.826538 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-11 
15:02:16.826549 | orchestrator | Wednesday 11 June 2025 15:01:07 +0000 (0:00:03.819) 0:00:56.587 ******** 2025-06-11 15:02:16.826560 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:02:16.826571 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.826581 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:02:16.826592 | orchestrator | 2025-06-11 15:02:16.826603 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-11 15:02:16.826614 | orchestrator | Wednesday 11 June 2025 15:01:10 +0000 (0:00:02.625) 0:00:59.212 ******** 2025-06-11 15:02:16.826624 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:02:16.826635 | orchestrator | 2025-06-11 15:02:16.826646 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-11 15:02:16.826662 | orchestrator | Wednesday 11 June 2025 15:01:12 +0000 (0:00:01.901) 0:01:01.113 ******** 2025-06-11 15:02:16.826673 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.826684 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.826695 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.826706 | orchestrator | 2025-06-11 15:02:16.826716 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-11 15:02:16.826727 | orchestrator | Wednesday 11 June 2025 15:01:12 +0000 (0:00:00.775) 0:01:01.889 ******** 2025-06-11 15:02:16.826738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.826785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826837 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.826874 | orchestrator | 2025-06-11 15:02:16.826887 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-11 15:02:16.826899 | orchestrator | Wednesday 11 June 2025 15:01:23 +0000 (0:00:10.493) 0:01:12.382 ******** 2025-06-11 15:02:16.826912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.826931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.826957 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.826970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.826995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.827009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.827022 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.827035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-11 15:02:16.827057 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.827071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:02:16.827084 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.827096 | orchestrator | 2025-06-11 15:02:16.827108 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-11 15:02:16.827120 | orchestrator | Wednesday 11 June 2025 15:01:25 +0000 (0:00:02.101) 0:01:14.483 ******** 2025-06-11 15:02:16.827140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.827181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}}) 2025-06-11 15:02:16.827203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-11 15:02:16.827215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:02:16.827302 | orchestrator | 2025-06-11 15:02:16.827313 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-11 15:02:16.827323 | orchestrator | Wednesday 11 June 2025 15:01:29 +0000 (0:00:04.139) 0:01:18.622 ******** 2025-06-11 15:02:16.827334 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:02:16.827345 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:02:16.827355 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:02:16.827366 | orchestrator | 2025-06-11 15:02:16.827377 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-11 15:02:16.827388 | orchestrator | Wednesday 11 June 2025 15:01:30 +0000 (0:00:01.016) 0:01:19.639 ******** 2025-06-11 15:02:16.827399 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827410 | orchestrator | 2025-06-11 15:02:16.827420 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-11 15:02:16.827431 | orchestrator | Wednesday 11 June 2025 15:01:33 +0000 (0:00:02.563) 0:01:22.202 ******** 2025-06-11 15:02:16.827442 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827452 | orchestrator | 2025-06-11 15:02:16.827463 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-11 15:02:16.827474 | orchestrator | Wednesday 11 June 2025 15:01:35 +0000 (0:00:02.075) 0:01:24.277 ******** 2025-06-11 15:02:16.827485 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827495 | orchestrator | 2025-06-11 15:02:16.827506 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-11 15:02:16.827516 | orchestrator | Wednesday 11 June 2025 15:01:46 +0000 (0:00:10.886) 0:01:35.164 ******** 2025-06-11 15:02:16.827527 | orchestrator | 2025-06-11 15:02:16.827538 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-11 15:02:16.827548 | orchestrator | Wednesday 11 June 2025 15:01:46 +0000 (0:00:00.053) 0:01:35.218 ******** 2025-06-11 15:02:16.827559 | orchestrator | 2025-06-11 15:02:16.827569 | orchestrator | TASK 
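The sequence just logged is the standard kolla-ansible bootstrap for a service: create the database, create the database user and grant it privileges (both run once, against testbed-node-0), then run a one-shot bootstrap container that applies barbican's schema migrations before the long-running containers are started. A plain-Ansible sketch of the two database steps, assuming the `community.mysql` collection; kolla-ansible actually performs them through its toolbox container, and the login variables below are illustrative:

```yaml
# Sketch only: kolla-ansible runs these via kolla_toolbox, not directly.
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Creating barbican database
      community.mysql.mysql_db:
        name: barbican
        state: present
        login_host: "{{ database_address }}"   # illustrative variable
        login_user: root
        login_password: "{{ database_password }}"

    - name: Creating barbican database user and setting permissions
      community.mysql.mysql_user:
        name: barbican
        password: "{{ barbican_database_password }}"
        host: "%"
        priv: "barbican.*:ALL"
        state: present
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"
```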
[barbican : Flush handlers] *********************************************** 2025-06-11 15:02:16.827580 | orchestrator | Wednesday 11 June 2025 15:01:46 +0000 (0:00:00.053) 0:01:35.271 ******** 2025-06-11 15:02:16.827590 | orchestrator | 2025-06-11 15:02:16.827601 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-11 15:02:16.827612 | orchestrator | Wednesday 11 June 2025 15:01:46 +0000 (0:00:00.052) 0:01:35.324 ******** 2025-06-11 15:02:16.827622 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827633 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:02:16.827643 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:02:16.827654 | orchestrator | 2025-06-11 15:02:16.827665 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-11 15:02:16.827675 | orchestrator | Wednesday 11 June 2025 15:01:52 +0000 (0:00:06.670) 0:01:41.995 ******** 2025-06-11 15:02:16.827687 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827697 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:02:16.827708 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:02:16.827718 | orchestrator | 2025-06-11 15:02:16.827735 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-11 15:02:16.827746 | orchestrator | Wednesday 11 June 2025 15:02:03 +0000 (0:00:10.643) 0:01:52.638 ******** 2025-06-11 15:02:16.827757 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:02:16.827767 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:02:16.827778 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:02:16.827789 | orchestrator | 2025-06-11 15:02:16.827800 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:02:16.827811 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-11 15:02:16.827828 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-11 15:02:16.827840 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-11 15:02:16.827851 | orchestrator | 2025-06-11 15:02:16.827862 | orchestrator | 2025-06-11 15:02:16.827877 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:02:16.827888 | orchestrator | Wednesday 11 June 2025 15:02:15 +0000 (0:00:11.441) 0:02:04.079 ******** 2025-06-11 15:02:16.827899 | orchestrator | =============================================================================== 2025-06-11 15:02:16.827910 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.46s 2025-06-11 15:02:16.827921 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.44s 2025-06-11 15:02:16.827932 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.89s 2025-06-11 15:02:16.827943 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.64s 2025-06-11 15:02:16.827954 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.49s 2025-06-11 15:02:16.827965 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.84s 2025-06-11 15:02:16.827975 | orchestrator | barbican : Restart barbican-api container 
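Configuration tasks earlier in the role notified restart handlers; the explicit `Flush handlers` steps force those handlers to run at this point rather than at the end of the play, so all three nodes restart `barbican_api`, `barbican_keystone_listener`, and `barbican_worker` with the freshly written configuration before the recap is printed. A minimal sketch of the notify-and-flush pattern; kolla-ansible restarts containers with its own `kolla_container` module, so the `community.docker` handler below is only illustrative:

```yaml
# Notify-and-flush sketch (illustrative modules and paths).
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Copying over barbican.conf
      ansible.builtin.template:
        src: barbican.conf.j2
        dest: /etc/kolla/barbican-api/barbican.conf
        mode: "0660"
      notify: Restart barbican-api container

    # Handlers normally run once at the end of the play; flushing them
    # here makes the restart happen immediately.
    - name: Flush handlers
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Restart barbican-api container
      community.docker.docker_container:
        name: barbican_api
        image: "registry.osism.tech/kolla/barbican-api:2024.2"
        state: started
        restart: true
```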
------------------------------- 6.67s 2025-06-11 15:02:16.827986 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.14s 2025-06-11 15:02:16.827997 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.98s 2025-06-11 15:02:16.828007 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.97s 2025-06-11 15:02:16.828018 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.92s 2025-06-11 15:02:16.828029 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.82s 2025-06-11 15:02:16.828039 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s 2025-06-11 15:02:16.828050 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.46s 2025-06-11 15:02:16.828060 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.63s 2025-06-11 15:02:16.828071 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.56s 2025-06-11 15:02:16.828082 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.10s 2025-06-11 15:02:16.828093 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.08s 2025-06-11 15:02:16.828104 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.02s 2025-06-11 15:02:16.828114 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.90s 2025-06-11 15:02:16.828125 | orchestrator | 2025-06-11 15:02:16 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:02:19.863963 | orchestrator | 2025-06-11 15:02:19 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED 2025-06-11 15:02:19.864377 | orchestrator | 2025-06-11 15:02:19 | INFO  | Task 7160791d-dec5-4449-8d60-3ea44758dc51 is in state STARTED 2025-06-11 15:02:19.864808 | orchestrator | 2025-06-11 15:02:19 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:02:19.871203 | orchestrator | 2025-06-11 15:02:19 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:02:19.871724 | orchestrator | 2025-06-11 15:02:19 | INFO  | Task 3f37f89d-8bcf-4982-8c83-38dde8643459 is in state STARTED 2025-06-11 15:02:19.871887 | orchestrator | 2025-06-11 15:02:19 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:02:22.899934 | orchestrator | 2025-06-11 15:02:22 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED 2025-06-11 15:02:22.900030 | orchestrator | 2025-06-11 15:02:22 | INFO  | Task 7160791d-dec5-4449-8d60-3ea44758dc51 is in state SUCCESS 2025-06-11 15:02:22.900970 | orchestrator | 2025-06-11 15:02:22 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:02:22.901657 | orchestrator | 2025-06-11 15:02:22 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:02:22.902297 | orchestrator | 2025-06-11 15:02:22 | INFO  | Task 3f37f89d-8bcf-4982-8c83-38dde8643459 is in state STARTED 2025-06-11 15:02:22.902327 | orchestrator | 2025-06-11 15:02:22 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:02:25.938093 | orchestrator | 2025-06-11 15:02:25 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED 2025-06-11 15:02:25.938200 | orchestrator | 2025-06-11 15:02:25 | INFO  
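From here the console interleaves the output of several deployments running in parallel: the OSISM wrapper enqueues each kolla play as a background task, polls the task states every few seconds, and dumps a play's full log only once its task finishes, which is why the barbican recap above carries play timestamps from 15:00-15:02 but appears in the console at 15:02:16. The same wait-until-done behaviour, expressed as an Ansible retry loop (the status helper named below is hypothetical; the real wrapper queries its task queue directly):

```yaml
# Poll-until-SUCCESS sketch; `osism-task-state` is a made-up helper.
- hosts: orchestrator
  gather_facts: false
  tasks:
    - name: Wait for task 80a10342-51e7-4eeb-a64c-22e98b230789
      ansible.builtin.command: osism-task-state 80a10342-51e7-4eeb-a64c-22e98b230789
      register: task_state
      changed_when: false
      retries: 200
      delay: 3
      until: task_state.stdout == "SUCCESS"
```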
| Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:02:25.938367 | orchestrator | 2025-06-11 15:02:25 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:02:25.939000 | orchestrator | 2025-06-11 15:02:25 | INFO  | Task 3f37f89d-8bcf-4982-8c83-38dde8643459 is in state STARTED 2025-06-11 15:02:25.939016 | orchestrator | 2025-06-11 15:02:25 | INFO  | Wait 1 second(s) until the next check [... identical polling rounds from 15:02:28 through 15:03:26 elided: tasks 80a10342…, 6e74df46…, 41773daf…, and 3f37f89d… remain in state STARTED ...] 2025-06-11 15:03:29.774931 | orchestrator | 2025-06-11 15:03:29 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED 2025-06-11 15:03:29.775643 | orchestrator | 2025-06-11 15:03:29 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED 2025-06-11 15:03:29.776439 | orchestrator | 2025-06-11 15:03:29 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:03:29.778511 | orchestrator | 2025-06-11 15:03:29 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:03:29.785387 | orchestrator | 2025-06-11 15:03:29 | INFO  | Task
3f37f89d-8bcf-4982-8c83-38dde8643459 is in state SUCCESS 2025-06-11 15:03:29.787456 | orchestrator | 2025-06-11 15:03:29.787482 | orchestrator | None 2025-06-11 15:03:29.787491 | orchestrator | 2025-06-11 15:03:29.787499 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:03:29.787508 | orchestrator | 2025-06-11 15:03:29.787515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:03:29.787524 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.340) 0:00:00.340 ******** 2025-06-11 15:03:29.787529 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:03:29.787535 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:03:29.787540 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:03:29.787544 | orchestrator | 2025-06-11 15:03:29.787560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:03:29.787565 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.312) 0:00:00.653 ******** 2025-06-11 15:03:29.787570 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-11 15:03:29.787574 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-11 15:03:29.787579 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-11 15:03:29.787583 | orchestrator | 2025-06-11 15:03:29.787588 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-11 15:03:29.787592 | orchestrator | 2025-06-11 15:03:29.787596 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-11 15:03:29.787600 | orchestrator | Wednesday 11 June 2025 15:00:13 +0000 (0:00:00.551) 0:00:01.205 ******** 2025-06-11 15:03:29.787605 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:03:29.787610 | orchestrator | 2025-06-11 15:03:29.787614 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-11 15:03:29.787618 | orchestrator | Wednesday 11 June 2025 15:00:13 +0000 (0:00:00.519) 0:00:01.724 ******** 2025-06-11 15:03:29.787623 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-11 15:03:29.787627 | orchestrator | 2025-06-11 15:03:29.787631 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-11 15:03:29.787635 | orchestrator | Wednesday 11 June 2025 15:00:17 +0000 (0:00:03.706) 0:00:05.431 ******** 2025-06-11 15:03:29.787639 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-11 15:03:29.787644 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-11 15:03:29.787648 | orchestrator | 2025-06-11 15:03:29.787653 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-11 15:03:29.787657 | orchestrator | Wednesday 11 June 2025 15:00:23 +0000 (0:00:06.610) 0:00:12.041 ******** 2025-06-11 15:03:29.787661 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:03:29.787665 | orchestrator | 2025-06-11 15:03:29.787669 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-11 15:03:29.787674 | orchestrator | Wednesday 11 June 2025 15:00:27 +0000 
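The designate play follows the same shape as barbican's: group hosts by Kolla action and by `enable_designate_True`, then register the service in Keystone before touching any containers. The registration creates the `designate` service of type `dns`, its internal and public endpoints on port 9001, the `service` project, a `designate` service user, and an `admin` role grant (the `no_log` warning printed for the user step is Ansible flagging that the module did not mark `update_password` as sensitive; it is not a failure). A hedged sketch of the equivalent calls using the `openstack.cloud` collection; kolla-ansible's `service-ks-register` role is more generic, and auth setup is omitted:

```yaml
# Sketch of the Keystone registration recorded above (auth omitted).
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: designate | Creating services
      openstack.cloud.catalog_service:
        name: designate
        service_type: dns
        state: present

    - name: designate | Creating endpoints
      openstack.cloud.endpoint:
        service: designate
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9001" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9001" }
```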
(0:00:03.281) 0:00:15.323 ******** 2025-06-11 15:03:29.787695 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:03:29.787699 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-11 15:03:29.787703 | orchestrator | 2025-06-11 15:03:29.787743 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-11 15:03:29.787748 | orchestrator | Wednesday 11 June 2025 15:00:30 +0000 (0:00:03.606) 0:00:18.929 ******** 2025-06-11 15:03:29.787752 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:03:29.787756 | orchestrator | 2025-06-11 15:03:29.787760 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-11 15:03:29.787764 | orchestrator | Wednesday 11 June 2025 15:00:34 +0000 (0:00:03.391) 0:00:22.320 ******** 2025-06-11 15:03:29.787768 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-11 15:03:29.787773 | orchestrator | 2025-06-11 15:03:29.787777 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-11 15:03:29.787782 | orchestrator | Wednesday 11 June 2025 15:00:38 +0000 (0:00:04.567) 0:00:26.888 ******** 2025-06-11 15:03:29.787788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.787809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.787817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.787855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.787977 | orchestrator | 2025-06-11 15:03:29.787981 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-11 15:03:29.787985 | orchestrator | Wednesday 11 June 2025 15:00:42 +0000 (0:00:03.591) 0:00:30.479 ******** 2025-06-11 15:03:29.787990 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.787994 | orchestrator | 2025-06-11 15:03:29.787998 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-11 15:03:29.788002 | orchestrator | Wednesday 11 June 2025 15:00:42 +0000 (0:00:00.111) 0:00:30.590 
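The `Check if policies shall be overwritten` / `Set designate policy file` pair implements kolla-ansible's policy-override hook: a custom `policy.yaml` is deployed only if the operator placed one in the node custom config directory, and no such file exists in this testbed, so both tasks are skipped. A sketch of that logic (the `node_custom_config` path convention is kolla-ansible's; exact variable names may differ by release):

```yaml
# Deploy a custom designate policy file only when the operator
# provides one (sketch).
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Check if policies shall be overwritten
      ansible.builtin.stat:
        path: "{{ node_custom_config }}/designate/policy.yaml"
      delegate_to: localhost
      run_once: true
      register: designate_policy

    - name: Set designate policy file
      ansible.builtin.set_fact:
        designate_policy_file: policy.yaml
      when: designate_policy.stat.exists
```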
******** 2025-06-11 15:03:29.788006 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.788010 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.788014 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.788018 | orchestrator | 2025-06-11 15:03:29.788022 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-11 15:03:29.788026 | orchestrator | Wednesday 11 June 2025 15:00:42 +0000 (0:00:00.231) 0:00:30.822 ******** 2025-06-11 15:03:29.788031 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:03:29.788035 | orchestrator | 2025-06-11 15:03:29.788039 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-11 15:03:29.788043 | orchestrator | Wednesday 11 June 2025 15:00:43 +0000 (0:00:00.565) 0:00:31.388 ******** 2025-06-11 15:03:29.788047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-06-11 15:03:29.788159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788262 | orchestrator | 2025-06-11 15:03:29.788267 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-11 15:03:29.788272 | orchestrator | Wednesday 11 June 2025 15:00:49 +0000 (0:00:05.829) 0:00:37.218 ******** 2025-06-11 15:03:29.788277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788367 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.788375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788544 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.788549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788587 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.788629 | orchestrator | 2025-06-11 15:03:29.788634 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-11 15:03:29.788638 | orchestrator | Wednesday 11 June 2025 15:00:50 +0000 (0:00:01.476) 0:00:38.694 ******** 2025-06-11 15:03:29.788643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788706 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.788713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788794 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.788801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.788809 | 
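
The two service-cert-copy tasks in this stretch skip every item on every node (the remaining testbed-node-2 skips continue below), which is the expected result when backend TLS is not enabled for the testbed. The per-item output comes from the role looping over a dictionary of designate services, where each value carries the container name, image, volumes, and healthcheck seen above; the healthcheck_port <service> 5672 checks appear to verify that the process holds a connection to RabbitMQ. A minimal sketch of the loop-and-gate pattern, assuming kolla-ansible-style names such as kolla_enable_tls_backend and designate_services (neither is confirmed by this log):

  # Sketch only: illustrates why every item prints as skipped when the
  # TLS gate is false; not the actual kolla-ansible role code.
  - name: "designate | Copying over backend internal TLS key"
    ansible.builtin.template:
      src: "designate-key.pem"                      # hypothetical source
      dest: "/etc/kolla/{{ item.key }}/designate-key.pem"
      mode: "0600"
    with_dict: "{{ designate_services }}"           # assumed dict of services
    when: kolla_enable_tls_backend | bool           # false here, so: skipping
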
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.788821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.788850 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.788854 | orchestrator | 2025-06-11 15:03:29.788859 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-11 15:03:29.788880 | orchestrator | Wednesday 11 June 2025 15:00:53 +0000 (0:00:02.627) 0:00:41.321 ******** 2025-06-11 15:03:29.788885 | 
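
The config.json files written by the task that starts here drive kolla's container entrypoint: on start, it copies each listed file from the bind-mounted /var/lib/kolla/config_files/ directory (the first volume in every item above) into place and then executes the configured command. A minimal sketch for designate-api, following the general kolla config.json shape; the exact command line and paths in this deployment are assumptions:

  {
    "command": "designate-api --config-file /etc/designate/designate.conf",
    "config_files": [
      {
        "source": "/var/lib/kolla/config_files/designate.conf",
        "dest": "/etc/designate/designate.conf",
        "owner": "designate",
        "perm": "0600"
      }
    ],
    "permissions": [
      {
        "path": "/var/log/kolla/designate",
        "owner": "designate:designate",
        "recurse": true
      }
    ]
  }
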
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.788913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.788992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789027 | orchestrator | 2025-06-11 15:03:29.789032 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-11 15:03:29.789036 | orchestrator | Wednesday 11 June 2025 15:00:59 +0000 (0:00:06.672) 0:00:47.994 ******** 2025-06-11 15:03:29.789040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.789048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.789052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.789563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789666 | orchestrator | 2025-06-11 
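
The pools.yaml rendered by the next task is what ties the designate components above together: designate-central stores the pool definition, designate-worker uses the per-target options to push zones to the bind9 backends, and the masters list points the backends at designate-mdns for zone transfers. A minimal single-pool sketch in the upstream Designate format; all names and addresses are placeholders, not values from this deployment:

  - name: default
    description: Default BIND9 pool (placeholder)
    ns_records:
      - hostname: ns1.testbed.osism.xyz.   # placeholder
        priority: 1
    nameservers:
      - host: 192.168.16.10                # placeholder, one per bind9 instance
        port: 53
    targets:
      - type: bind9
        masters:
          - host: 192.168.16.10            # designate-mdns address, placeholder
            port: 5354
        options:
          host: 192.168.16.10
          port: 53
          rndc_host: 192.168.16.10
          rndc_port: 953
          rndc_key_file: /etc/designate/rndc.key
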
15:03:29.789671 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-11 15:03:29.789676 | orchestrator | Wednesday 11 June 2025 15:01:21 +0000 (0:00:22.175) 0:01:10.169 ******** 2025-06-11 15:03:29.789680 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-11 15:03:29.789684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-11 15:03:29.789688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-11 15:03:29.789692 | orchestrator | 2025-06-11 15:03:29.789696 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-11 15:03:29.789700 | orchestrator | Wednesday 11 June 2025 15:01:28 +0000 (0:00:06.841) 0:01:17.010 ******** 2025-06-11 15:03:29.789708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-11 15:03:29.789711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-11 15:03:29.789715 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-11 15:03:29.789719 | orchestrator | 2025-06-11 15:03:29.789723 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-11 15:03:29.789727 | orchestrator | Wednesday 11 June 2025 15:01:33 +0000 (0:00:05.173) 0:01:22.184 ******** 2025-06-11 15:03:29.789732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789835 | orchestrator | 2025-06-11 15:03:29.789839 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-11 15:03:29.789843 | orchestrator | Wednesday 11 June 2025 15:01:37 +0000 (0:00:03.540) 0:01:25.725 ******** 2025-06-11 15:03:29.789863 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.789881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.789943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.789964 | orchestrator | 2025-06-11 15:03:29.789968 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-11 15:03:29.789972 | orchestrator | Wednesday 11 June 2025 15:01:40 +0000 (0:00:03.172) 0:01:28.898 ******** 2025-06-11 15:03:29.789976 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.789981 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.789985 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.789989 | orchestrator | 2025-06-11 15:03:29.789992 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-11 15:03:29.789996 | orchestrator | Wednesday 11 June 2025 15:01:41 +0000 (0:00:00.815) 0:01:29.713 ******** 2025-06-11 15:03:29.790000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.790005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.790009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.790078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.790091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790151 | orchestrator | skipping: 
[testbed-node-0] 2025-06-11 15:03:29.790156 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.790160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-11 15:03:29.790165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-11 15:03:29.790170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
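Each (item=...) blob echoed in these designate task results is one kolla-ansible service definition: container name, group, image, a volume list, and a healthcheck whose test command runs inside the container. As a rough illustration only (kolla-ansible's actual code path drives the container engine through its own Ansible module; render_docker_run is a name invented here), such a definition maps onto docker run flags roughly as sketched below. Note the '' placeholders in the volume lists, which stand for optional mounts that are unset in this deployment and have to be filtered out.

import shlex

def render_docker_run(svc: dict) -> str:
    """Sketch: render one kolla-style service definition as a docker run command."""
    args = ["docker", "run", "--detach", "--name", svc["container_name"]]
    # '' entries are placeholders for unset optional mounts; drop them.
    for volume in filter(None, svc["volumes"]):
        args += ["--volume", volume]
    hc = svc.get("healthcheck")
    if hc:
        # kolla gives interval/timeout/start_period as plain seconds; the
        # test list is ['CMD-SHELL', '<command>'], i.e. Docker's shell form.
        args += ["--health-cmd", hc["test"][-1],
                 "--health-interval", f"{hc['interval']}s",
                 "--health-retries", str(hc["retries"]),
                 "--health-start-period", f"{hc['start_period']}s",
                 "--health-timeout", f"{hc['timeout']}s"]
    args.append(svc["image"])
    return shlex.join(args)

For the designate_backend_bind9 item above this would yield, among other flags, --health-cmd 'healthcheck_listen named 53' --health-interval 30s, matching the healthcheck dict printed in the log.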
2025-06-11 15:03:29.790193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-11 15:03:29.790198 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.790202 | orchestrator | 2025-06-11 15:03:29.790207 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-11 15:03:29.790211 | orchestrator | Wednesday 11 June 2025 15:01:43 +0000 (0:00:01.708) 0:01:31.422 ******** 2025-06-11 15:03:29.790216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.790221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.790226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-11 15:03:29.790234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-11 15:03:29.790316 | orchestrator | 2025-06-11 15:03:29.790320 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-11 15:03:29.790324 | orchestrator | Wednesday 11 June 2025 15:01:48 +0000 (0:00:05.123) 0:01:36.546 ******** 2025-06-11 15:03:29.790328 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:29.790332 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:29.790339 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:29.790343 | orchestrator | 2025-06-11 15:03:29.790347 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-11 15:03:29.790351 | orchestrator | Wednesday 11 June 2025 15:01:48 +0000 (0:00:00.434) 0:01:36.980 ******** 2025-06-11 15:03:29.790355 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-11 15:03:29.790359 | orchestrator | 2025-06-11 15:03:29.790363 | orchestrator | TASK [designate : Creating 
Designate databases user and setting permissions] ***
2025-06-11 15:03:29.790366 | orchestrator | Wednesday 11 June 2025 15:01:51 +0000 (0:00:02.451) 0:01:39.431 ********
2025-06-11 15:03:29.790370 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 15:03:29.790374 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-06-11 15:03:29.790378 | orchestrator |
2025-06-11 15:03:29.790382 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-06-11 15:03:29.790386 | orchestrator | Wednesday 11 June 2025 15:01:53 +0000 (0:00:02.284) 0:01:41.715 ********
2025-06-11 15:03:29.790390 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790394 | orchestrator |
2025-06-11 15:03:29.790398 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-11 15:03:29.790402 | orchestrator | Wednesday 11 June 2025 15:02:09 +0000 (0:00:15.808) 0:01:57.523 ********
2025-06-11 15:03:29.790406 | orchestrator |
2025-06-11 15:03:29.790409 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-11 15:03:29.790413 | orchestrator | Wednesday 11 June 2025 15:02:09 +0000 (0:00:00.066) 0:01:57.590 ********
2025-06-11 15:03:29.790417 | orchestrator |
2025-06-11 15:03:29.790421 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-11 15:03:29.790425 | orchestrator | Wednesday 11 June 2025 15:02:09 +0000 (0:00:00.057) 0:01:57.648 ********
2025-06-11 15:03:29.790429 | orchestrator |
2025-06-11 15:03:29.790433 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-11 15:03:29.790436 | orchestrator | Wednesday 11 June 2025 15:02:09 +0000 (0:00:00.053) 0:01:57.702 ********
2025-06-11 15:03:29.790445 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790449 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790453 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790456 | orchestrator |
2025-06-11 15:03:29.790460 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-11 15:03:29.790464 | orchestrator | Wednesday 11 June 2025 15:02:26 +0000 (0:00:16.980) 0:02:14.682 ********
2025-06-11 15:03:29.790468 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790472 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790476 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790480 | orchestrator |
2025-06-11 15:03:29.790483 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-11 15:03:29.790487 | orchestrator | Wednesday 11 June 2025 15:02:39 +0000 (0:00:13.077) 0:02:27.760 ********
2025-06-11 15:03:29.790491 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790495 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790499 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790503 | orchestrator |
2025-06-11 15:03:29.790507 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-11 15:03:29.790511 | orchestrator | Wednesday 11 June 2025 15:02:50 +0000 (0:00:10.927) 0:02:38.688 ********
2025-06-11 15:03:29.790515 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790518 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790522 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790526 | orchestrator |
2025-06-11 15:03:29.790530 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-11 15:03:29.790534 | orchestrator | Wednesday 11 June 2025 15:02:56 +0000 (0:00:05.622) 0:02:44.310 ********
2025-06-11 15:03:29.790538 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790542 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790545 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790549 | orchestrator |
2025-06-11 15:03:29.790553 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-11 15:03:29.790557 | orchestrator | Wednesday 11 June 2025 15:03:07 +0000 (0:00:11.133) 0:02:55.444 ********
2025-06-11 15:03:29.790561 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790565 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:03:29.790568 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:03:29.790572 | orchestrator |
2025-06-11 15:03:29.790576 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-11 15:03:29.790580 | orchestrator | Wednesday 11 June 2025 15:03:19 +0000 (0:00:12.261) 0:03:07.706 ********
2025-06-11 15:03:29.790584 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:03:29.790588 | orchestrator |
2025-06-11 15:03:29.790592 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:03:29.790596 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-11 15:03:29.790600 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-11 15:03:29.790604 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-11 15:03:29.790608 | orchestrator |
2025-06-11 15:03:29.790612 | orchestrator |
2025-06-11 15:03:29.790619 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:03:29.790623 | orchestrator | Wednesday 11 June 2025 15:03:26 +0000 (0:00:07.115) 0:03:14.821 ********
2025-06-11 15:03:29.790627 | orchestrator | ===============================================================================
2025-06-11 15:03:29.790630 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.18s
2025-06-11 15:03:29.790634 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.98s
2025-06-11 15:03:29.790644 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.81s
2025-06-11 15:03:29.790648 | orchestrator | designate : Restart designate-api container ---------------------------- 13.08s
2025-06-11 15:03:29.790651 | orchestrator | designate : Restart designate-worker container ------------------------- 12.26s
2025-06-11 15:03:29.790655 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.13s
2025-06-11 15:03:29.790659 | orchestrator | designate : Restart designate-central container ------------------------ 10.93s
2025-06-11 15:03:29.790663 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.12s
2025-06-11 15:03:29.790667 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.84s
2025-06-11 15:03:29.790671 | orchestrator | designate : Copying over config.json files for services ----------------- 6.67s
2025-06-11 15:03:29.790675 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.61s
2025-06-11 15:03:29.790678 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.83s
2025-06-11 15:03:29.790682 | orchestrator | designate : Restart designate-producer container ------------------------ 5.62s
2025-06-11 15:03:29.790686 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.17s
2025-06-11 15:03:29.790690 | orchestrator | designate : Check designate containers ---------------------------------- 5.12s
2025-06-11 15:03:29.790694 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.57s
2025-06-11 15:03:29.790697 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.71s
2025-06-11 15:03:29.790701 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.61s
2025-06-11 15:03:29.790705 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.59s
2025-06-11 15:03:29.790709 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.54s
2025-06-11 15:03:29.790713 | orchestrator | 2025-06-11 15:03:29 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:03:32.828679 | orchestrator | 2025-06-11 15:03:32 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state STARTED
2025-06-11 15:03:32.830419 | orchestrator | 2025-06-11 15:03:32 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED
2025-06-11 15:03:32.832028 | orchestrator | 2025-06-11 15:03:32 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED
2025-06-11 15:03:32.833399 | orchestrator | 2025-06-11 15:03:32 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED
2025-06-11 15:03:32.833433 | orchestrator | 2025-06-11 15:03:32 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:03:35.895380 | orchestrator | 2025-06-11 15:03:35 | INFO  | Task 80a10342-51e7-4eeb-a64c-22e98b230789 is in state SUCCESS
2025-06-11 15:03:35.896419 | orchestrator |
2025-06-11 15:03:35.896460 | orchestrator |
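The interleaved INFO lines above are not Ansible output: they appear to come from the OSISM layer driving the deployment, which runs the individual playbooks as background tasks and polls the listed task IDs once per second until each reaches a terminal state, hence the repeated "Wait 1 second(s) until the next check". A minimal sketch of that polling pattern, with get_task_state as a hypothetical stand-in for the real lookup (e.g. a query against a Celery result backend):

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> dict:
    """Sketch: poll task IDs until each one reports a terminal state."""
    pending = set(task_ids)
    states: dict = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                states[task_id] = state
        pending -= set(states)
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states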
2025-06-11 15:03:35.896475 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:03:35.896487 | orchestrator |
2025-06-11 15:03:35.896499 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:03:35.896512 | orchestrator | Wednesday 11 June 2025 15:00:05 +0000 (0:00:00.229) 0:00:00.229 ********
2025-06-11 15:03:35.896524 | orchestrator | ok: [testbed-manager]
2025-06-11 15:03:35.896537 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:03:35.896549 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:03:35.896561 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:03:35.896573 | orchestrator | ok: [testbed-node-3]
2025-06-11 15:03:35.896584 | orchestrator | ok: [testbed-node-4]
2025-06-11 15:03:35.896596 | orchestrator | ok: [testbed-node-5]
2025-06-11 15:03:35.896607 | orchestrator |
2025-06-11 15:03:35.896619 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:03:35.896682 | orchestrator | Wednesday 11 June 2025 15:00:06 +0000 (0:00:00.752) 0:00:00.981 ********
2025-06-11 15:03:35.896695 | orchestrator | ok: [testbed-manager] =>
(item=enable_prometheus_True) 2025-06-11 15:03:35.896819 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-11 15:03:35.896870 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-11 15:03:35.897705 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-11 15:03:35.897722 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-11 15:03:35.897748 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-11 15:03:35.897758 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-11 15:03:35.897768 | orchestrator | 2025-06-11 15:03:35.897778 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-11 15:03:35.897788 | orchestrator | 2025-06-11 15:03:35.897797 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-11 15:03:35.897807 | orchestrator | Wednesday 11 June 2025 15:00:06 +0000 (0:00:00.766) 0:00:01.748 ******** 2025-06-11 15:03:35.897818 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:03:35.897829 | orchestrator | 2025-06-11 15:03:35.897838 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-11 15:03:35.897847 | orchestrator | Wednesday 11 June 2025 15:00:08 +0000 (0:00:01.591) 0:00:03.339 ******** 2025-06-11 15:03:35.897875 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 15:03:35.897890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897912 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897965 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.897987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898376 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 15:03:35.898391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898443 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898592 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898602 | orchestrator | 2025-06-11 15:03:35.898612 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-11 15:03:35.898622 | orchestrator | Wednesday 11 June 2025 15:00:11 +0000 (0:00:03.349) 0:00:06.688 ******** 2025-06-11 15:03:35.898633 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:03:35.898643 | orchestrator | 2025-06-11 15:03:35.898653 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-11 15:03:35.898662 | orchestrator | Wednesday 11 June 2025 15:00:13 +0000 (0:00:01.373) 0:00:08.062 ******** 2025-06-11 15:03:35.898677 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 15:03:35.898743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898811 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.898864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.898979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.898997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899053 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 15:03:35.899076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899146 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.899199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.899237 | orchestrator | 2025-06-11 15:03:35.899247 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-11 15:03:35.899257 | orchestrator | Wednesday 11 June 2025 15:00:19 +0000 (0:00:05.835) 0:00:13.897 ******** 2025-06-11 15:03:35.899267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-11 15:03:35.899333 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899354 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-11 15:03:35.899391 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899401 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.899412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899510 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899540 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.899550 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.899559 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.899576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899606 | orchestrator | skipping: 
[testbed-node-3] 2025-06-11 15:03:35.899616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899659 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.899669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899706 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.899716 | orchestrator | 2025-06-11 15:03:35.899726 | 
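[Note: Both backend-TLS tasks in this play, the certificate copy above and the key copy that follows, skip every loop item on every host: this testbed deploys without backend TLS, so the guard condition in the service-cert-copy role evaluates false for each service. A minimal sketch of such a guarded task, assuming it keys off kolla-ansible's kolla_enable_tls_backend toggle; the paths and the exact condition are illustrative, not copied from kolla-ansible:

    # Sketch of why every item reports "skipping": the copy is gated on the
    # backend-TLS toggle, which is off in this run. Variable names, paths,
    # and the condition list are illustrative assumptions.
    - name: "{{ project_name }} | Copying over backend internal TLS certificate"
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ project_name }}-cert.pem"
        mode: "0600"
      with_dict: "{{ project_services }}"
      when:
        - kolla_enable_tls_backend | bool
        - item.value.enabled | bool

With the first condition false, Ansible still iterates the full service dict per host, which is why the log prints one "skipping" entry per item rather than a single skipped task.]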
orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-11 15:03:35.899735 | orchestrator | Wednesday 11 June 2025 15:00:20 +0000 (0:00:01.439) 0:00:15.337 ******** 2025-06-11 15:03:35.899745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-11 15:03:35.899765 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899796 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-11 15:03:35.899813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899824 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-11 15:03:35.899864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-11 15:03:35.899875 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.899884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-11 15:03:35.899894 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.899904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.899914 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:03:35.899930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.899940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.899961 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:03:35.899971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.899986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.899996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900026 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:03:35.900041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900093 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:03:35.900103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900155 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:03:35.900165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900208 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:03:35.900218 | orchestrator |
2025-06-11 15:03:35.900228 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-06-11 15:03:35.900238 | orchestrator | Wednesday 11 June 2025 15:00:22 +0000 (0:00:01.849) 0:00:17.186 ********
2025-06-11 15:03:35.900248 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-11 15:03:35.900258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-11 15:03:35.900354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900378 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-11 15:03:35.900466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-11 15:03:35.900551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-11 15:03:35.900596 | orchestrator |
2025-06-11 15:03:35.900606 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-11 15:03:35.900616 | orchestrator | Wednesday 11 June 2025 15:00:28 +0000 (0:00:05.652) 0:00:22.839 ********
2025-06-11 15:03:35.900625 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-11 15:03:35.900635 | orchestrator |
2025-06-11 15:03:35.900645 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-11 15:03:35.900659 | orchestrator | Wednesday 11 June 2025 15:00:28 +0000 (0:00:00.860) 0:00:23.700 ********
2025-06-11 15:03:35.900670 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900680 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900691 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900705 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900715 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900731 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900747 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900757 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900767 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094618, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900777 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900794 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900804 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900820 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900830 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900847 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900857 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900867 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900882 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900892 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900907 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094608, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900917 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900933 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900944 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900954 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900968 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900978 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.900993 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901003 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901018 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901029 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901038 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901052 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901067 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094585, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901078 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901087 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901104 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901132 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901142 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901156 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901172 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901182 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901192 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901326 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901336 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901351 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901367 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901377 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094587, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901387 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901427 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901449 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901464 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901481 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901490 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901501 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901538 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901550 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901560 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901585 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901595 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094605, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901605 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901614 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901652 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901663 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901673 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901694 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901705 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901715 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901725 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901781 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901793 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901803 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901824 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901834 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901844 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-11 15:03:35.901854 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode':
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094592, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.089046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.901869 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901880 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901890 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901905 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901920 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
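An aside on the output pattern above: each (item={...}) dict is an Ansible find stat result (path, mode, owner, size, inode, atime/mtime/ctime and the per-class permission flags), which indicates this task is a find registered on the deployment side followed by a copy that loops over the matched files, evaluating its condition per host. A minimal sketch of that pattern, with assumed task names, destination path, mode, and group condition (not the actual kolla-ansible source):

  # Sketch only; everything except the source path and the *.rules pattern
  # visible in the log is an assumption.
  - name: Find prometheus rule files
    ansible.builtin.find:
      paths: /operations/prometheus      # source directory seen in the items above
      patterns: "*.rules"
    delegate_to: localhost
    register: prometheus_rule_files

  - name: Copy prometheus rule files
    ansible.builtin.copy:
      src: "{{ item.path }}"
      dest: "{{ node_config_directory }}/prometheus-server/{{ item.path | basename }}"
      mode: "0660"                       # destination mode is an assumption
    become: true
    loop: "{{ prometheus_rule_files.files }}"
    # Hosts outside the prometheus server group skip every loop item, which is
    # why only testbed-manager reports "changed" while all testbed-node-* hosts
    # print "skipping" for each rule file.
    when: inventory_hostname in groups['prometheus']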
2025-06-11 15:03:35.901930 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901939 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901955 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901965 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.901981 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094600, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.091046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.901991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902006 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902045 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902058 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902075 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902085 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902101 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.902167 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902297 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902320 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902336 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902350 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902393 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094609, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0930462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902426 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902438 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902453 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902463 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902489 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902505 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902515 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902535 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.902550 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902560 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902570 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902580 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.902594 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094615, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902626 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902636 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902645 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.902660 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.902670 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-11 15:03:35.902679 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.902688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094640, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1010463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902696 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094611, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.094046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902715 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094590, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902723 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094598, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094582, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.087046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902740 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094606, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.092046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902752 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094638, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.1000462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902760 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094596, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0900462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094620, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.095046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-11 15:03:35.902784 | orchestrator | 2025-06-11 15:03:35.902792 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-11 15:03:35.902800 | orchestrator | Wednesday 11 June 2025 15:00:51 +0000 (0:00:22.297) 0:00:45.998 ******** 2025-06-11 15:03:35.902808 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 15:03:35.902816 | orchestrator | 2025-06-11 15:03:35.902824 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-11 15:03:35.902831 | orchestrator | Wednesday 11 June 2025 15:00:52 +0000 (0:00:01.454) 0:00:47.452 ******** 2025-06-11 15:03:35.902839 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.902851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902859 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.902867 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902875 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.902883 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.902891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902899 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.902909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902918 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.902927 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.902936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902945 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.902954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902962 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.902971 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.902980 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.902989 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.902998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903007 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.903016 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.903026 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903035 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.903043 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903052 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.903061 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.903070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903079 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.903088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903097 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.903105 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.903138 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903147 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-11 15:03:35.903156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-11 15:03:35.903165 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-11 15:03:35.903179 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 15:03:35.903188 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:03:35.903198 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-11 15:03:35.903207 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-11 15:03:35.903215 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-11 15:03:35.903224 | orchestrator | ok: [testbed-node-5 -> localhost] 
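A note on the [WARNING]: Skipped ... 'is not a directory' lines above: ansible.builtin.find emits exactly that warning for each search path it cannot treat as a directory, then carries on, which is why every host still finishes "ok", just with an empty match list. A sketch of the lookup that produces this behaviour, with assumed task and register names:

  # Sketch only; the per-host override directory layout is taken from the
  # warning text above, the rest is an assumption.
  - name: Find prometheus host config overrides
    ansible.builtin.find:
      paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
      patterns: "*.yml"
    delegate_to: localhost
    register: prometheus_host_overrides
    # A host without its own prometheus.yml.d directory just triggers the
    # "Skipped ... due to this access issue" warning and yields zero files;
    # it does not fail the task.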
2025-06-11 15:03:35.903233 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-11 15:03:35.903242 | orchestrator | 2025-06-11 15:03:35.903251 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-11 15:03:35.903260 | orchestrator | Wednesday 11 June 2025 15:00:55 +0000 (0:00:02.423) 0:00:49.876 ******** 2025-06-11 15:03:35.903268 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903276 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903284 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.903292 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903299 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903307 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.903315 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903322 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903330 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903338 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.903346 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-11 15:03:35.903353 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903361 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-11 15:03:35.903369 | orchestrator | 2025-06-11 15:03:35.903376 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-11 15:03:35.903385 | orchestrator | Wednesday 11 June 2025 15:01:19 +0000 (0:00:24.085) 0:01:13.961 ******** 2025-06-11 15:03:35.903392 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903400 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.903408 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903416 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903428 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903436 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.903444 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903452 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903459 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903467 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903475 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-11 15:03:35.903482 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.903490 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-11 15:03:35.903498 | orchestrator | 2025-06-11 15:03:35.903505 | orchestrator | TASK [prometheus : Copying over 
prometheus alertmanager config file] *********** 2025-06-11 15:03:35.903513 | orchestrator | Wednesday 11 June 2025 15:01:24 +0000 (0:00:05.152) 0:01:19.114 ******** 2025-06-11 15:03:35.903521 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903534 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.903542 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-11 15:03:35.903550 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903558 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903566 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903574 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903581 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903589 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.903597 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903604 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903616 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-11 15:03:35.903624 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.903631 | orchestrator | 2025-06-11 15:03:35.903639 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-11 15:03:35.903647 | orchestrator | Wednesday 11 June 2025 15:01:27 +0000 (0:00:03.291) 0:01:22.406 ******** 2025-06-11 15:03:35.903655 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 15:03:35.903662 | orchestrator | 2025-06-11 15:03:35.903670 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-11 15:03:35.903678 | orchestrator | Wednesday 11 June 2025 15:01:28 +0000 (0:00:00.644) 0:01:23.050 ******** 2025-06-11 15:03:35.903685 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.903693 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.903701 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903708 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.903716 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903724 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903731 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.903739 | orchestrator | 2025-06-11 15:03:35.903746 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-11 15:03:35.903754 | orchestrator | Wednesday 11 June 2025 15:01:29 +0000 (0:00:01.421) 0:01:24.471 ******** 2025-06-11 15:03:35.903762 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.903769 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903777 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.903785 | orchestrator | skipping: [testbed-node-5] 2025-06-11 
15:03:35.903792 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903800 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.903807 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:03:35.903815 | orchestrator | 2025-06-11 15:03:35.903823 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-11 15:03:35.903830 | orchestrator | Wednesday 11 June 2025 15:01:33 +0000 (0:00:03.755) 0:01:28.227 ******** 2025-06-11 15:03:35.903838 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903846 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903854 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903861 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.903869 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.903876 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903890 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903898 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.903905 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903913 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.903921 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903932 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.903941 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-11 15:03:35.903948 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.903956 | orchestrator | 2025-06-11 15:03:35.903964 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-11 15:03:35.903971 | orchestrator | Wednesday 11 June 2025 15:01:35 +0000 (0:00:02.192) 0:01:30.419 ******** 2025-06-11 15:03:35.903979 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.903987 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.903995 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.904002 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.904010 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.904018 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.904025 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.904033 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.904041 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.904048 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.904056 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-11 15:03:35.904064 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-11 15:03:35.904072 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.904079 | orchestrator | 2025-06-11 15:03:35.904087 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-11 15:03:35.904095 | orchestrator | Wednesday 11 June 2025 15:01:37 +0000 (0:00:02.016) 0:01:32.435 ******** 2025-06-11 15:03:35.904102 | orchestrator | [WARNING]: Skipped 2025-06-11 15:03:35.904125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-11 15:03:35.904133 | orchestrator | due to this access issue: 2025-06-11 15:03:35.904141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-11 15:03:35.904149 | orchestrator | not a directory 2025-06-11 15:03:35.904156 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-11 15:03:35.904164 | orchestrator | 2025-06-11 15:03:35.904176 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-11 15:03:35.904184 | orchestrator | Wednesday 11 June 2025 15:01:39 +0000 (0:00:01.675) 0:01:34.111 ******** 2025-06-11 15:03:35.904192 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.904199 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.904207 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.904214 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.904222 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.904230 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.904238 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.904245 | orchestrator | 2025-06-11 15:03:35.904253 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-11 15:03:35.904261 | orchestrator | Wednesday 11 June 2025 15:01:40 +0000 (0:00:00.943) 0:01:35.055 ******** 2025-06-11 15:03:35.904274 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.904282 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:03:35.904290 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:03:35.904297 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:03:35.904305 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:03:35.904313 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:03:35.904320 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:03:35.904328 | orchestrator | 2025-06-11 15:03:35.904335 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-11 15:03:35.904343 | orchestrator | Wednesday 11 June 2025 15:01:41 +0000 (0:00:01.114) 0:01:36.169 ******** 2025-06-11 15:03:35.904352 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-11 15:03:35.904365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904415 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-11 15:03:35.904432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904482 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904556 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-11 15:03:35.904571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904600 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-11 15:03:35.904616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-11 15:03:35.904649 | orchestrator | 2025-06-11 15:03:35.904657 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-11 15:03:35.904665 | orchestrator | Wednesday 11 June 2025 15:01:45 +0000 (0:00:04.510) 0:01:40.680 ******** 2025-06-11 15:03:35.904672 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-11 15:03:35.904680 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:03:35.904688 | orchestrator | 2025-06-11 15:03:35.904696 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904704 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:01.186) 0:01:41.867 ******** 2025-06-11 15:03:35.904711 | orchestrator | 2025-06-11 15:03:35.904719 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904727 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:00.379) 0:01:42.247 ******** 2025-06-11 15:03:35.904735 | orchestrator | 2025-06-11 15:03:35.904742 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904750 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:00.104) 0:01:42.351 ******** 2025-06-11 15:03:35.904758 | orchestrator | 2025-06-11 15:03:35.904765 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904773 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:00.129) 0:01:42.480 ******** 2025-06-11 15:03:35.904781 | orchestrator | 2025-06-11 15:03:35.904789 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904797 | orchestrator | Wednesday 11 June 
2025 15:01:47 +0000 (0:00:00.095) 0:01:42.576 ******** 2025-06-11 15:03:35.904805 | orchestrator | 2025-06-11 15:03:35.904812 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904820 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:00.099) 0:01:42.676 ******** 2025-06-11 15:03:35.904828 | orchestrator | 2025-06-11 15:03:35.904835 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-11 15:03:35.904843 | orchestrator | Wednesday 11 June 2025 15:01:48 +0000 (0:00:00.096) 0:01:42.773 ******** 2025-06-11 15:03:35.904851 | orchestrator | 2025-06-11 15:03:35.904858 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-11 15:03:35.904866 | orchestrator | Wednesday 11 June 2025 15:01:48 +0000 (0:00:00.143) 0:01:42.916 ******** 2025-06-11 15:03:35.904874 | orchestrator | changed: [testbed-manager] 2025-06-11 15:03:35.904881 | orchestrator | 2025-06-11 15:03:35.904890 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-11 15:03:35.904898 | orchestrator | Wednesday 11 June 2025 15:02:01 +0000 (0:00:13.759) 0:01:56.676 ******** 2025-06-11 15:03:35.904905 | orchestrator | changed: [testbed-manager] 2025-06-11 15:03:35.904913 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.904921 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:03:35.904929 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:03:35.904941 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:03:35.904949 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.904956 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:03:35.904983 | orchestrator | 2025-06-11 15:03:35.904990 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-11 15:03:35.904998 | orchestrator | Wednesday 11 June 2025 15:02:13 +0000 (0:00:11.514) 0:02:08.191 ******** 2025-06-11 15:03:35.905006 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.905013 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.905021 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:03:35.905029 | orchestrator | 2025-06-11 15:03:35.905036 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-11 15:03:35.905044 | orchestrator | Wednesday 11 June 2025 15:02:26 +0000 (0:00:13.053) 0:02:21.245 ******** 2025-06-11 15:03:35.905052 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.905059 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.905067 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:03:35.905074 | orchestrator | 2025-06-11 15:03:35.905082 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-11 15:03:35.905090 | orchestrator | Wednesday 11 June 2025 15:02:39 +0000 (0:00:13.094) 0:02:34.339 ******** 2025-06-11 15:03:35.905097 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.905105 | orchestrator | changed: [testbed-manager] 2025-06-11 15:03:35.905127 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.905135 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:03:35.905143 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:03:35.905151 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:03:35.905158 | orchestrator | changed: [testbed-node-2] 2025-06-11 
15:03:35.905166 | orchestrator | 2025-06-11 15:03:35.905174 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-11 15:03:35.905182 | orchestrator | Wednesday 11 June 2025 15:02:57 +0000 (0:00:17.729) 0:02:52.068 ******** 2025-06-11 15:03:35.905189 | orchestrator | changed: [testbed-manager] 2025-06-11 15:03:35.905197 | orchestrator | 2025-06-11 15:03:35.905205 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-11 15:03:35.905213 | orchestrator | Wednesday 11 June 2025 15:03:07 +0000 (0:00:09.933) 0:03:02.002 ******** 2025-06-11 15:03:35.905220 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:03:35.905228 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:03:35.905235 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:03:35.905243 | orchestrator | 2025-06-11 15:03:35.905251 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-11 15:03:35.905258 | orchestrator | Wednesday 11 June 2025 15:03:18 +0000 (0:00:11.054) 0:03:13.057 ******** 2025-06-11 15:03:35.905266 | orchestrator | changed: [testbed-manager] 2025-06-11 15:03:35.905273 | orchestrator | 2025-06-11 15:03:35.905281 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-11 15:03:35.905289 | orchestrator | Wednesday 11 June 2025 15:03:23 +0000 (0:00:05.129) 0:03:18.187 ******** 2025-06-11 15:03:35.905300 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:03:35.905308 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:03:35.905316 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:03:35.905323 | orchestrator | 2025-06-11 15:03:35.905331 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:03:35.905339 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-11 15:03:35.905347 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:03:35.905355 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:03:35.905363 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:03:35.905371 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-11 15:03:35.905383 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-11 15:03:35.905391 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-11 15:03:35.905399 | orchestrator | 2025-06-11 15:03:35.905407 | orchestrator | 2025-06-11 15:03:35.905415 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:03:35.905422 | orchestrator | Wednesday 11 June 2025 15:03:33 +0000 (0:00:10.542) 0:03:28.729 ******** 2025-06-11 15:03:35.905430 | orchestrator | =============================================================================== 2025-06-11 15:03:35.905438 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 24.09s 2025-06-11 15:03:35.905446 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.30s 2025-06-11 15:03:35.905453 | 
orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.73s 2025-06-11 15:03:35.905461 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.76s 2025-06-11 15:03:35.905468 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.09s 2025-06-11 15:03:35.905476 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.05s 2025-06-11 15:03:35.905484 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.51s 2025-06-11 15:03:35.905496 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.05s 2025-06-11 15:03:35.905504 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.54s 2025-06-11 15:03:35.905512 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.93s 2025-06-11 15:03:35.905519 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.84s 2025-06-11 15:03:35.905527 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.65s 2025-06-11 15:03:35.905535 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.15s 2025-06-11 15:03:35.905542 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.13s 2025-06-11 15:03:35.905550 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.51s 2025-06-11 15:03:35.905558 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.76s 2025-06-11 15:03:35.905565 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.35s 2025-06-11 15:03:35.905573 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.29s 2025-06-11 15:03:35.905580 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.42s 2025-06-11 15:03:35.905588 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.19s 2025-06-11 15:03:35.905596 | orchestrator | 2025-06-11 15:03:35 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED 2025-06-11 15:03:35.905603 | orchestrator | 2025-06-11 15:03:35 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:03:35.905611 | orchestrator | 2025-06-11 15:03:35 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:03:35.908557 | orchestrator | 2025-06-11 15:03:35 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:03:35.908589 | orchestrator | 2025-06-11 15:03:35 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:03:38.954886 | orchestrator | 2025-06-11 15:03:38 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED 2025-06-11 15:03:38.955816 | orchestrator | 2025-06-11 15:03:38 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:03:38.957336 | orchestrator | 2025-06-11 15:03:38 | INFO  | Task 41773daf-1726-4e75-b968-1129e0eb4f60 is in state STARTED 2025-06-11 15:03:38.958305 | orchestrator | 2025-06-11 15:03:38 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:03:38.958765 | orchestrator | 2025-06-11 15:03:38 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:03:41.996713 | 
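
For context on the handler sequence above: in these Ansible runs, configuration tasks notify per-service "Restart ... container" handlers, and the repeated "Flush handlers" meta tasks force any queued restarts to run at a defined point in the role instead of at the end of the play. A minimal sketch of that pattern, with illustrative task names and paths (not the actual kolla-ansible source, which drives containers through its own kolla_docker/kolla_container module; community.docker.docker_container stands in here):

- name: Copying over prometheus config file
  ansible.builtin.template:
    src: prometheus.yml.j2
    dest: /etc/kolla/prometheus-server/prometheus.yml
  notify:
    - Restart prometheus-server container

# Runs any notified handlers immediately instead of at the end of the play.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers

The matching handler then force-restarts the container so it picks up the new configuration:

handlers:
  - name: Restart prometheus-server container
    community.docker.docker_container:
      name: prometheus_server
      image: registry.osism.tech/kolla/prometheus-v2-server:2024.2
      state: started
      restart: true   # restart even if the container is already running
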
[poller output condensed: from 15:03:41 to 15:04:27 the client rechecks the task states every few seconds; tasks 7f3d7ff7-129b-48b3-854f-b8d9be2c571a, 6e74df46-eb60-463f-82e8-805372389f40 and 015e6778-6b4f-4a2f-84ca-d92996ac09b3 remain in state STARTED throughout, task 41773daf-1726-4e75-b968-1129e0eb4f60 reaches SUCCESS at 15:03:54, and task d327bed4-1b36-4085-90b8-ffbb2cd3dbcb first appears at 15:03:54 and reaches SUCCESS at 15:04:27] 2025-06-11 15:04:30.679223 | orchestrator | 2025-06-11
15:04:30 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED 2025-06-11 15:04:30.679466 | orchestrator | 2025-06-11 15:04:30 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:30.680513 | orchestrator | 2025-06-11 15:04:30 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:30.685296 | orchestrator | 2025-06-11 15:04:30 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:30.685332 | orchestrator | 2025-06-11 15:04:30 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:33.719135 | orchestrator | 2025-06-11 15:04:33 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state STARTED 2025-06-11 15:04:33.722686 | orchestrator | 2025-06-11 15:04:33 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:33.725884 | orchestrator | 2025-06-11 15:04:33 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:33.728802 | orchestrator | 2025-06-11 15:04:33 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:33.728836 | orchestrator | 2025-06-11 15:04:33 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:36.768389 | orchestrator | 2025-06-11 15:04:36.768478 | orchestrator | 2025-06-11 15:04:36.768494 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-11 15:04:36.768506 | orchestrator | 2025-06-11 15:04:36.768517 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-11 15:04:36.768623 | orchestrator | Wednesday 11 June 2025 15:02:21 +0000 (0:00:00.085) 0:00:00.085 ******** 2025-06-11 15:04:36.768637 | orchestrator | changed: [localhost] 2025-06-11 15:04:36.768687 | orchestrator | 2025-06-11 15:04:36.768699 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-11 15:04:36.768710 | orchestrator | Wednesday 11 June 2025 15:02:22 +0000 (0:00:00.879) 0:00:00.965 ******** 2025-06-11 15:04:36.768722 | orchestrator | changed: [localhost] 2025-06-11 15:04:36.768733 | orchestrator | 2025-06-11 15:04:36.768744 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-11 15:04:36.768755 | orchestrator | Wednesday 11 June 2025 15:03:47 +0000 (0:01:24.906) 0:01:25.871 ******** 2025-06-11 15:04:36.768766 | orchestrator | changed: [localhost] 2025-06-11 15:04:36.768777 | orchestrator | 2025-06-11 15:04:36.768788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:04:36.768799 | orchestrator | 2025-06-11 15:04:36.768810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:04:36.768821 | orchestrator | Wednesday 11 June 2025 15:03:51 +0000 (0:00:03.902) 0:01:29.773 ******** 2025-06-11 15:04:36.768832 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:04:36.768843 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:04:36.768854 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:04:36.768865 | orchestrator | 2025-06-11 15:04:36.768876 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:04:36.768887 | orchestrator | Wednesday 11 June 2025 15:03:51 +0000 (0:00:00.258) 0:01:30.032 ******** 2025-06-11 15:04:36.768898 | orchestrator | [WARNING]: Could not match supplied host pattern, 
ignoring: enable_ironic_True 2025-06-11 15:04:36.768910 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-11 15:04:36.768921 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-11 15:04:36.768933 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-11 15:04:36.768947 | orchestrator | 2025-06-11 15:04:36.768958 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-11 15:04:36.768970 | orchestrator | skipping: no hosts matched 2025-06-11 15:04:36.768981 | orchestrator | 2025-06-11 15:04:36.769073 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:04:36.769090 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.769127 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.769140 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.769151 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.769161 | orchestrator | 2025-06-11 15:04:36.769172 | orchestrator | 2025-06-11 15:04:36.769183 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:04:36.769194 | orchestrator | Wednesday 11 June 2025 15:03:51 +0000 (0:00:00.350) 0:01:30.382 ******** 2025-06-11 15:04:36.769225 | orchestrator | =============================================================================== 2025-06-11 15:04:36.769236 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 84.91s 2025-06-11 15:04:36.769247 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.90s 2025-06-11 15:04:36.769258 | orchestrator | Ensure the destination directory exists --------------------------------- 0.88s 2025-06-11 15:04:36.769269 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-06-11 15:04:36.769280 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-06-11 15:04:36.769291 | orchestrator | 2025-06-11 15:04:36.769301 | orchestrator | 2025-06-11 15:04:36.769312 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:04:36.769323 | orchestrator | 2025-06-11 15:04:36.769333 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:04:36.769344 | orchestrator | Wednesday 11 June 2025 15:03:55 +0000 (0:00:00.229) 0:00:00.229 ******** 2025-06-11 15:04:36.769355 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:04:36.769366 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:04:36.769377 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:04:36.769387 | orchestrator | ok: [testbed-manager] 2025-06-11 15:04:36.769398 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:04:36.769409 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:04:36.769419 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:04:36.769430 | orchestrator | 2025-06-11 15:04:36.769441 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:04:36.769451 | orchestrator | Wednesday 11 June 2025 15:03:56 +0000 (0:00:00.690) 0:00:00.919 ******** 
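
The "Group hosts based on ..." plays above rely on ansible.builtin.group_by to build dynamic groups out of configuration flags; that is why "PLAY [Apply role ironic]" matched no hosts and why the host-pattern warning for enable_ironic_True is benign: every host landed in enable_ironic_False, leaving the True group empty. A minimal sketch of the mechanism, with an illustrative flag name:

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_ironic_{{ enable_ironic | bool }}"

# A later play can then target only hosts where the flag is true:
- hosts: enable_ironic_True
  roles:
    - ironic
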
2025-06-11 15:04:36.769462 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769473 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769484 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769495 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769506 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769516 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769527 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-11 15:04:36.769538 | orchestrator | 2025-06-11 15:04:36.769548 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-11 15:04:36.769559 | orchestrator | 2025-06-11 15:04:36.769588 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-11 15:04:36.769599 | orchestrator | Wednesday 11 June 2025 15:03:56 +0000 (0:00:00.618) 0:00:01.538 ******** 2025-06-11 15:04:36.769610 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:04:36.769622 | orchestrator | 2025-06-11 15:04:36.769633 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-11 15:04:36.769644 | orchestrator | Wednesday 11 June 2025 15:03:57 +0000 (0:00:01.219) 0:00:02.758 ******** 2025-06-11 15:04:36.769662 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-11 15:04:36.769673 | orchestrator | 2025-06-11 15:04:36.769683 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-11 15:04:36.769696 | orchestrator | Wednesday 11 June 2025 15:04:01 +0000 (0:00:03.373) 0:00:06.131 ******** 2025-06-11 15:04:36.769709 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-11 15:04:36.769722 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-11 15:04:36.769734 | orchestrator | 2025-06-11 15:04:36.769746 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-11 15:04:36.769856 | orchestrator | Wednesday 11 June 2025 15:04:08 +0000 (0:00:06.724) 0:00:12.856 ******** 2025-06-11 15:04:36.769879 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:04:36.769891 | orchestrator | 2025-06-11 15:04:36.769904 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-11 15:04:36.769917 | orchestrator | Wednesday 11 June 2025 15:04:11 +0000 (0:00:03.182) 0:00:16.038 ******** 2025-06-11 15:04:36.769929 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:04:36.769942 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-11 15:04:36.769955 | orchestrator | 2025-06-11 15:04:36.769968 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-11 15:04:36.769985 | orchestrator | Wednesday 11 June 2025 15:04:15 +0000 (0:00:04.024) 0:00:20.063 ******** 2025-06-11 15:04:36.769998 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2025-06-11 15:04:36.770009 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-11 15:04:36.770120 | orchestrator | 2025-06-11 15:04:36.770133 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-11 15:04:36.770144 | orchestrator | Wednesday 11 June 2025 15:04:21 +0000 (0:00:06.648) 0:00:26.711 ******** 2025-06-11 15:04:36.770155 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-11 15:04:36.770166 | orchestrator | 2025-06-11 15:04:36.770176 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:04:36.770188 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770199 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770210 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770221 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770232 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770243 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770254 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:04:36.770264 | orchestrator | 2025-06-11 15:04:36.770275 | orchestrator | 2025-06-11 15:04:36.770286 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:04:36.770297 | orchestrator | Wednesday 11 June 2025 15:04:26 +0000 (0:00:04.836) 0:00:31.547 ******** 2025-06-11 15:04:36.770308 | orchestrator | =============================================================================== 2025-06-11 15:04:36.770319 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.72s 2025-06-11 15:04:36.770329 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.65s 2025-06-11 15:04:36.770349 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.84s 2025-06-11 15:04:36.770360 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.02s 2025-06-11 15:04:36.770371 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.37s 2025-06-11 15:04:36.770382 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.18s 2025-06-11 15:04:36.770393 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.22s 2025-06-11 15:04:36.770404 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2025-06-11 15:04:36.770414 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-06-11 15:04:36.770425 | orchestrator | 2025-06-11 15:04:36.770436 | orchestrator | 2025-06-11 15:04:36.770447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:04:36.770458 | orchestrator | 2025-06-11 15:04:36.770482 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 
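
The service-ks-register tasks above are the standard Keystone bootstrap for a service: create the service entity, its internal and public endpoints, the service project and user, then grant roles. (The "[WARNING]: Module did not set no_log for update_password" line flags a gap in the module's argument spec, where that option is not marked no_log, rather than a task failure.) A hedged sketch of the equivalent calls using the openstack.cloud collection, with illustrative values taken from the ceph-rgw run (kolla-ansible executes these through its own tooling rather than these exact modules):

- name: Create the swift service entity
  openstack.cloud.catalog_service:
    cloud: testbed
    name: swift
    service_type: object-store
    state: present

- name: Create the public endpoint
  openstack.cloud.endpoint:
    cloud: testbed
    service: swift
    endpoint_interface: public
    url: "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"
    state: present

- name: Create the ceph_rgw service user
  openstack.cloud.identity_user:
    cloud: testbed
    name: ceph_rgw
    password: "{{ ceph_rgw_keystone_password }}"   # assumed variable name
    default_project: service
    state: present

- name: Grant the admin role on the service project
  openstack.cloud.role_assignment:
    cloud: testbed
    user: ceph_rgw
    role: admin
    project: service
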
15:04:36.770493 | orchestrator | Wednesday 11 June 2025 15:03:30 +0000 (0:00:00.255) 0:00:00.255 ******** 2025-06-11 15:04:36.770504 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:04:36.770515 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:04:36.770526 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:04:36.770537 | orchestrator | 2025-06-11 15:04:36.770658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:04:36.770669 | orchestrator | Wednesday 11 June 2025 15:03:31 +0000 (0:00:00.282) 0:00:00.537 ******** 2025-06-11 15:04:36.770680 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-11 15:04:36.770691 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-11 15:04:36.770701 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-11 15:04:36.770712 | orchestrator | 2025-06-11 15:04:36.770723 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-11 15:04:36.770733 | orchestrator | 2025-06-11 15:04:36.770744 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-11 15:04:36.770755 | orchestrator | Wednesday 11 June 2025 15:03:31 +0000 (0:00:00.388) 0:00:00.925 ******** 2025-06-11 15:04:36.770765 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:04:36.770776 | orchestrator | 2025-06-11 15:04:36.770787 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-11 15:04:36.770797 | orchestrator | Wednesday 11 June 2025 15:03:32 +0000 (0:00:00.518) 0:00:01.444 ******** 2025-06-11 15:04:36.770808 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-11 15:04:36.770819 | orchestrator | 2025-06-11 15:04:36.770829 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-11 15:04:36.770840 | orchestrator | Wednesday 11 June 2025 15:03:35 +0000 (0:00:03.549) 0:00:04.993 ******** 2025-06-11 15:04:36.770850 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-11 15:04:36.770861 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-11 15:04:36.770872 | orchestrator | 2025-06-11 15:04:36.770889 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-11 15:04:36.770900 | orchestrator | Wednesday 11 June 2025 15:03:42 +0000 (0:00:06.591) 0:00:11.584 ******** 2025-06-11 15:04:36.770911 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:04:36.770921 | orchestrator | 2025-06-11 15:04:36.770932 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-11 15:04:36.770943 | orchestrator | Wednesday 11 June 2025 15:03:45 +0000 (0:00:03.378) 0:00:14.963 ******** 2025-06-11 15:04:36.770954 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:04:36.770964 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-11 15:04:36.770984 | orchestrator | 2025-06-11 15:04:36.770995 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-11 15:04:36.771005 | orchestrator | Wednesday 11 June 2025 15:03:49 +0000 
(0:00:04.036) 0:00:19.000 ******** 2025-06-11 15:04:36.771016 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:04:36.771027 | orchestrator | 2025-06-11 15:04:36.771038 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-11 15:04:36.771101 | orchestrator | Wednesday 11 June 2025 15:03:52 +0000 (0:00:03.202) 0:00:22.203 ******** 2025-06-11 15:04:36.771113 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-11 15:04:36.771124 | orchestrator | 2025-06-11 15:04:36.771135 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-11 15:04:36.771145 | orchestrator | Wednesday 11 June 2025 15:03:56 +0000 (0:00:03.929) 0:00:26.133 ******** 2025-06-11 15:04:36.771156 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.771167 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:04:36.771178 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:04:36.771189 | orchestrator | 2025-06-11 15:04:36.771199 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-11 15:04:36.771210 | orchestrator | Wednesday 11 June 2025 15:03:57 +0000 (0:00:00.269) 0:00:26.403 ******** 2025-06-11 15:04:36.771224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771280 | orchestrator | 2025-06-11 15:04:36.771296 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-11 15:04:36.771307 | orchestrator | Wednesday 11 June 2025 15:03:57 +0000 (0:00:00.724) 0:00:27.127 ******** 2025-06-11 15:04:36.771318 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.771329 | orchestrator | 2025-06-11 15:04:36.771339 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-11 15:04:36.771350 | orchestrator | Wednesday 11 June 2025 15:03:57 +0000 (0:00:00.111) 0:00:27.238 ******** 2025-06-11 15:04:36.771361 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.771371 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:04:36.771382 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:04:36.771393 | orchestrator | 2025-06-11 15:04:36.771404 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-11 15:04:36.771414 | orchestrator | Wednesday 11 June 2025 15:03:58 +0000 (0:00:00.373) 0:00:27.612 ******** 2025-06-11 15:04:36.771425 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:04:36.771436 | orchestrator | 2025-06-11 15:04:36.771446 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-11 15:04:36.771457 | orchestrator | Wednesday 11 June 2025 15:03:58 +0000 (0:00:00.459) 0:00:28.071 ******** 2025-06-11 15:04:36.771469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771517 | orchestrator | 2025-06-11 15:04:36.771528 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-11 15:04:36.771539 | orchestrator | Wednesday 11 June 2025 15:04:00 +0000 (0:00:01.341) 0:00:29.413 ******** 2025-06-11 15:04:36.771554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771566 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.771578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771587 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:04:36.771597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771608 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:04:36.771617 | orchestrator | 2025-06-11 15:04:36.771632 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-11 15:04:36.771642 | orchestrator | Wednesday 11 June 2025 15:04:00 +0000 (0:00:00.566) 0:00:29.979 ******** 2025-06-11 15:04:36.771652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771667 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.771681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771691 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:04:36.771701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.771711 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:04:36.771720 | orchestrator | 2025-06-11 15:04:36.771730 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-11 15:04:36.771739 | orchestrator | Wednesday 11 June 2025 15:04:01 +0000 (0:00:00.589) 0:00:30.568 ******** 2025-06-11 15:04:36.771750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771797 | orchestrator | 2025-06-11 15:04:36.771807 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-11 15:04:36.771816 | orchestrator | Wednesday 11 June 2025 15:04:02 +0000 (0:00:01.251) 0:00:31.820 ******** 2025-06-11 15:04:36.771826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.771869 | orchestrator | 2025-06-11 
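The "Copying over" tasks above render one source template per host, so the only differences in the resulting files come from host variables such as the API address seen in the healthcheck items. A minimal sketch of that mechanism, assuming an illustrative variable name (api_interface_address) and template text that are not taken from the real kolla templates:

  # Per-host config templating with Jinja2, as kolla-ansible's template
  # tasks do. Variable name and template body are illustrative only.
  from jinja2 import Template

  template = Template(
      "[placement]\n"
      "bind_host = {{ api_interface_address }}\n"
      "bind_port = 8780\n"
  )

  for host, addr in {
      "testbed-node-0": "192.168.16.10",
      "testbed-node-1": "192.168.16.11",
      "testbed-node-2": "192.168.16.12",
  }.items():
      # Each host gets the same template rendered with its own address.
      print(f"--- {host} ---")
      print(template.render(api_interface_address=addr))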
15:04:36.771879 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-11 15:04:36.771889 | orchestrator | Wednesday 11 June 2025 15:04:04 +0000 (0:00:02.134) 0:00:33.955 ******** 2025-06-11 15:04:36.771898 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-11 15:04:36.771908 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-11 15:04:36.771917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-11 15:04:36.771927 | orchestrator | 2025-06-11 15:04:36.771937 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-11 15:04:36.771946 | orchestrator | Wednesday 11 June 2025 15:04:06 +0000 (0:00:01.406) 0:00:35.361 ******** 2025-06-11 15:04:36.771956 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:04:36.771965 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:04:36.771975 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:04:36.771984 | orchestrator | 2025-06-11 15:04:36.771994 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-11 15:04:36.772003 | orchestrator | Wednesday 11 June 2025 15:04:07 +0000 (0:00:01.700) 0:00:37.062 ******** 2025-06-11 15:04:36.772021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.772032 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:04:36.772042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.772066 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:04:36.772082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-11 15:04:36.772098 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:04:36.772107 | orchestrator | 2025-06-11 15:04:36.772117 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-11 15:04:36.772126 | orchestrator | Wednesday 11 June 2025 15:04:08 +0000 (0:00:00.715) 0:00:37.778 ******** 2025-06-11 15:04:36.772136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.772151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-11 15:04:36.772161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-11 15:04:36.772171 | orchestrator |
2025-06-11 15:04:36.772181 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-06-11 15:04:36.772191 | orchestrator | Wednesday 11 June 2025 15:04:10 +0000 (0:00:01.776) 0:00:39.555 ********
2025-06-11 15:04:36.772200 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:04:36.772216 | orchestrator |
2025-06-11 15:04:36.772225 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-06-11 15:04:36.772235 | orchestrator | Wednesday 11 June 2025 15:04:12 +0000 (0:00:02.236) 0:00:41.700 ********
2025-06-11 15:04:36.772244 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:04:36.772254 | orchestrator |
2025-06-11 15:04:36.772263 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-11 15:04:36.772273 | orchestrator | Wednesday 11 June 2025 15:04:14 +0000 (0:00:02.236) 0:00:43.936 ********
2025-06-11 15:04:36.772282 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:04:36.772292 | orchestrator |
2025-06-11 15:04:36.772301 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-11 15:04:36.772311 | orchestrator | Wednesday 11 June 2025 15:04:28 +0000 (0:00:13.772) 0:00:57.709 ********
2025-06-11 15:04:36.772320 | orchestrator |
2025-06-11 15:04:36.772330 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-11 15:04:36.772339 | orchestrator | Wednesday 11 June 2025 15:04:28 +0000 (0:00:00.065) 0:00:57.774 ********
2025-06-11 15:04:36.772349 | orchestrator |
2025-06-11 15:04:36.772358 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-11 15:04:36.772368 | orchestrator | Wednesday 11 June 2025 15:04:28 +0000 (0:00:00.068) 0:00:57.842 ********
2025-06-11 15:04:36.772378 | orchestrator |
2025-06-11 15:04:36.772392 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-11 15:04:36.772402 | orchestrator | Wednesday 11 June 2025 15:04:28 +0000 (0:00:00.067) 0:00:57.910 ********
2025-06-11 15:04:36.772412 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:04:36.772422 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:04:36.772431 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:04:36.772441 | orchestrator |
2025-06-11 15:04:36.772450 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:04:36.772460 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-11 15:04:36.772470 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-11 15:04:36.772480 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-11 15:04:36.772489 | orchestrator |
2025-06-11 15:04:36.772499 | orchestrator |
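The "Running placement bootstrap container" task above is the long pole of this play (13.77s in the recap that follows; the per-task durations come from Ansible's profile_tasks callback, which is evidently enabled here). It starts a one-shot container from the placement-api image to run the database schema migration before the persistent API containers come up. A rough sketch of that one-shot pattern with the Docker SDK for Python — the image name is taken from the log, while the migration command is an assumption (kolla actually drives it through its config-file-based entrypoint):

  # One-shot bootstrap container, sketched with the Docker SDK.
  # The explicit command is an assumed stand-in for kolla's entrypoint.
  import docker

  client = docker.from_env()
  logs = client.containers.run(
      image="registry.osism.tech/kolla/placement-api:2024.2",
      command="placement-manage db sync",  # assumed schema-migration call
      remove=True,   # one-shot: container is removed after it exits
      detach=False,  # block until the migration finishes, return its output
  )
  print(logs.decode())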
2025-06-11 15:04:36.772508 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:04:36.772518 | orchestrator | Wednesday 11 June 2025 15:04:34 +0000 (0:00:05.870) 0:01:03.781 ********
2025-06-11 15:04:36.772528 | orchestrator | ===============================================================================
2025-06-11 15:04:36.772537 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.77s
2025-06-11 15:04:36.772547 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.59s
2025-06-11 15:04:36.772556 | orchestrator | placement : Restart placement-api container ----------------------------- 5.87s
2025-06-11 15:04:36.772566 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.04s
2025-06-11 15:04:36.772575 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.93s
2025-06-11 15:04:36.772585 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.55s
2025-06-11 15:04:36.772598 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.38s
2025-06-11 15:04:36.772608 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s
2025-06-11 15:04:36.772618 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.24s
2025-06-11 15:04:36.772627 | orchestrator | placement : Creating placement databases -------------------------------- 2.15s
2025-06-11 15:04:36.772637 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.13s
2025-06-11 15:04:36.772651 | orchestrator | placement : Check placement containers ---------------------------------- 1.78s
2025-06-11 15:04:36.772661 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.70s
2025-06-11 15:04:36.772671 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.41s
2025-06-11 15:04:36.772680 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.34s
2025-06-11 15:04:36.772690 | orchestrator | placement : Copying over config.json files for services ----------------- 1.25s
2025-06-11 15:04:36.772699 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.72s
2025-06-11 15:04:36.772709 | orchestrator | placement : Copying over existing policy file --------------------------- 0.72s
2025-06-11 15:04:36.772718 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.59s
2025-06-11 15:04:36.772728 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.57s
2025-06-11 15:04:36.772737 | orchestrator | 2025-06-11 15:04:36 | INFO  | Task 7f3d7ff7-129b-48b3-854f-b8d9be2c571a is in state SUCCESS
2025-06-11 15:04:36.772747 | orchestrator | 2025-06-11 15:04:36 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:04:36.772757 | orchestrator | 2025-06-11 15:04:36 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED
2025-06-11 15:04:36.772766 | orchestrator | 2025-06-11 15:04:36 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:04:36.772776 | orchestrator | 2025-06-11 15:04:36 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:04:36.772786 | orchestrator |
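Once the play has finished, the osism wrapper goes back to polling the task IDs that are still running, as the INFO lines below show: every ID is checked, then the loop sleeps for one second and repeats until each task reaches a terminal state. A sketch of that loop shape — get_task_state is a stand-in for the real task-status lookup (for example a Celery AsyncResult query), not an actual osism API:

  # Poll loop matching the "Task ... is in state ..." / "Wait 1 second(s)"
  # pattern in the log. get_task_state is a hypothetical status callback.
  import time

  def wait_for_tasks(task_ids, get_task_state, interval=1):
      pending = set(task_ids)
      while pending:
          for task_id in sorted(pending):
              state = get_task_state(task_id)
              print(f"Task {task_id} is in state {state}")
              if state in ("SUCCESS", "FAILURE"):
                  pending.discard(task_id)
          if pending:
              print(f"Wait {interval} second(s) until the next check")
              time.sleep(interval)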
2025-06-11 15:04:36 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:39.820706 | orchestrator | 2025-06-11 15:04:39 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:39.821914 | orchestrator | 2025-06-11 15:04:39 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:39.823812 | orchestrator | 2025-06-11 15:04:39 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:39.825720 | orchestrator | 2025-06-11 15:04:39 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:39.825923 | orchestrator | 2025-06-11 15:04:39 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:42.868311 | orchestrator | 2025-06-11 15:04:42 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:42.869533 | orchestrator | 2025-06-11 15:04:42 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:42.871180 | orchestrator | 2025-06-11 15:04:42 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:42.872310 | orchestrator | 2025-06-11 15:04:42 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:42.872348 | orchestrator | 2025-06-11 15:04:42 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:45.922276 | orchestrator | 2025-06-11 15:04:45 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:45.923406 | orchestrator | 2025-06-11 15:04:45 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:45.925406 | orchestrator | 2025-06-11 15:04:45 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:45.926914 | orchestrator | 2025-06-11 15:04:45 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:45.926940 | orchestrator | 2025-06-11 15:04:45 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:48.966361 | orchestrator | 2025-06-11 15:04:48 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:48.967093 | orchestrator | 2025-06-11 15:04:48 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:48.968414 | orchestrator | 2025-06-11 15:04:48 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:48.969353 | orchestrator | 2025-06-11 15:04:48 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:48.969380 | orchestrator | 2025-06-11 15:04:48 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:52.023133 | orchestrator | 2025-06-11 15:04:52 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:52.024351 | orchestrator | 2025-06-11 15:04:52 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:52.026152 | orchestrator | 2025-06-11 15:04:52 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:52.027639 | orchestrator | 2025-06-11 15:04:52 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:52.027742 | orchestrator | 2025-06-11 15:04:52 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:55.068106 | orchestrator | 2025-06-11 15:04:55 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:55.069057 | orchestrator | 2025-06-11 15:04:55 | INFO 
 | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:55.071220 | orchestrator | 2025-06-11 15:04:55 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:55.072190 | orchestrator | 2025-06-11 15:04:55 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:55.072235 | orchestrator | 2025-06-11 15:04:55 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:04:58.112228 | orchestrator | 2025-06-11 15:04:58 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:04:58.112326 | orchestrator | 2025-06-11 15:04:58 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:04:58.112340 | orchestrator | 2025-06-11 15:04:58 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:04:58.112352 | orchestrator | 2025-06-11 15:04:58 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:04:58.112363 | orchestrator | 2025-06-11 15:04:58 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:05:01.137739 | orchestrator | 2025-06-11 15:05:01 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:05:01.138012 | orchestrator | 2025-06-11 15:05:01 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state STARTED 2025-06-11 15:05:01.140374 | orchestrator | 2025-06-11 15:05:01 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:05:01.142489 | orchestrator | 2025-06-11 15:05:01 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED 2025-06-11 15:05:01.142586 | orchestrator | 2025-06-11 15:05:01 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:05:04.183074 | orchestrator | 2025-06-11 15:05:04 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:05:04.183274 | orchestrator | 2025-06-11 15:05:04 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:05:04.186551 | orchestrator | 2025-06-11 15:05:04 | INFO  | Task 6e74df46-eb60-463f-82e8-805372389f40 is in state SUCCESS 2025-06-11 15:05:04.187339 | orchestrator | 2025-06-11 15:05:04.187367 | orchestrator | 2025-06-11 15:05:04.187380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:05:04.187413 | orchestrator | 2025-06-11 15:05:04.187426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:05:04.187437 | orchestrator | Wednesday 11 June 2025 15:00:11 +0000 (0:00:00.231) 0:00:00.231 ******** 2025-06-11 15:05:04.187448 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:05:04.187460 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:05:04.187471 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:05:04.187483 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:05:04.187493 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:05:04.187504 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:05:04.187515 | orchestrator | 2025-06-11 15:05:04.187526 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:05:04.187537 | orchestrator | Wednesday 11 June 2025 15:00:12 +0000 (0:00:00.717) 0:00:00.949 ******** 2025-06-11 15:05:04.187547 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-11 15:05:04.187559 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-11 
15:05:04.187569 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-11 15:05:04.187580 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-11 15:05:04.187591 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-11 15:05:04.187601 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-11 15:05:04.187612 | orchestrator | 2025-06-11 15:05:04.187623 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-11 15:05:04.187634 | orchestrator | 2025-06-11 15:05:04.187645 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-11 15:05:04.187655 | orchestrator | Wednesday 11 June 2025 15:00:13 +0000 (0:00:00.636) 0:00:01.586 ******** 2025-06-11 15:05:04.187668 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:05:04.187679 | orchestrator | 2025-06-11 15:05:04.187701 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-11 15:05:04.187712 | orchestrator | Wednesday 11 June 2025 15:00:14 +0000 (0:00:01.452) 0:00:03.038 ******** 2025-06-11 15:05:04.187723 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:05:04.187734 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:05:04.187745 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:05:04.187755 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:05:04.187766 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:05:04.187776 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:05:04.187803 | orchestrator | 2025-06-11 15:05:04.187815 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-11 15:05:04.188061 | orchestrator | Wednesday 11 June 2025 15:00:16 +0000 (0:00:01.577) 0:00:04.616 ******** 2025-06-11 15:05:04.188082 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:05:04.188096 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:05:04.188108 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:05:04.188121 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:05:04.188133 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:05:04.188145 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:05:04.188158 | orchestrator | 2025-06-11 15:05:04.188171 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-11 15:05:04.188184 | orchestrator | Wednesday 11 June 2025 15:00:17 +0000 (0:00:01.055) 0:00:05.671 ******** 2025-06-11 15:05:04.188197 | orchestrator | ok: [testbed-node-0] => { 2025-06-11 15:05:04.188210 | orchestrator |  "changed": false, 2025-06-11 15:05:04.188222 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.188235 | orchestrator | } 2025-06-11 15:05:04.188248 | orchestrator | ok: [testbed-node-1] => { 2025-06-11 15:05:04.188260 | orchestrator |  "changed": false, 2025-06-11 15:05:04.188274 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.188286 | orchestrator | } 2025-06-11 15:05:04.188299 | orchestrator | ok: [testbed-node-2] => { 2025-06-11 15:05:04.188322 | orchestrator |  "changed": false, 2025-06-11 15:05:04.188333 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.188344 | orchestrator | } 2025-06-11 15:05:04.188354 | orchestrator | ok: [testbed-node-3] => { 2025-06-11 15:05:04.188365 | orchestrator |  "changed": false, 
2025-06-11 15:05:04.188908 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.188920 | orchestrator | } 2025-06-11 15:05:04.188932 | orchestrator | ok: [testbed-node-4] => { 2025-06-11 15:05:04.188943 | orchestrator |  "changed": false, 2025-06-11 15:05:04.188955 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.188967 | orchestrator | } 2025-06-11 15:05:04.188978 | orchestrator | ok: [testbed-node-5] => { 2025-06-11 15:05:04.188989 | orchestrator |  "changed": false, 2025-06-11 15:05:04.189001 | orchestrator |  "msg": "All assertions passed" 2025-06-11 15:05:04.189071 | orchestrator | } 2025-06-11 15:05:04.189084 | orchestrator | 2025-06-11 15:05:04.189096 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-11 15:05:04.189107 | orchestrator | Wednesday 11 June 2025 15:00:17 +0000 (0:00:00.663) 0:00:06.335 ******** 2025-06-11 15:05:04.189117 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.189128 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.189139 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.189149 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.189158 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.189168 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.189177 | orchestrator | 2025-06-11 15:05:04.189187 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-11 15:05:04.189197 | orchestrator | Wednesday 11 June 2025 15:00:18 +0000 (0:00:00.525) 0:00:06.860 ******** 2025-06-11 15:05:04.189206 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-11 15:05:04.189216 | orchestrator | 2025-06-11 15:05:04.189225 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-11 15:05:04.189235 | orchestrator | Wednesday 11 June 2025 15:00:21 +0000 (0:00:03.231) 0:00:10.092 ******** 2025-06-11 15:05:04.189245 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-11 15:05:04.189256 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-11 15:05:04.189266 | orchestrator | 2025-06-11 15:05:04.189313 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-11 15:05:04.189325 | orchestrator | Wednesday 11 June 2025 15:00:27 +0000 (0:00:05.991) 0:00:16.084 ******** 2025-06-11 15:05:04.189335 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:05:04.189345 | orchestrator | 2025-06-11 15:05:04.189356 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-11 15:05:04.189366 | orchestrator | Wednesday 11 June 2025 15:00:30 +0000 (0:00:03.117) 0:00:19.205 ******** 2025-06-11 15:05:04.189376 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:05:04.189387 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-11 15:05:04.189397 | orchestrator | 2025-06-11 15:05:04.189408 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-11 15:05:04.189417 | orchestrator | Wednesday 11 June 2025 15:00:34 +0000 (0:00:03.823) 0:00:23.029 ******** 2025-06-11 15:05:04.189427 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:05:04.189438 | 
orchestrator | 2025-06-11 15:05:04.189448 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-11 15:05:04.189458 | orchestrator | Wednesday 11 June 2025 15:00:38 +0000 (0:00:03.529) 0:00:26.558 ******** 2025-06-11 15:05:04.189469 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-11 15:05:04.189479 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-11 15:05:04.189489 | orchestrator | 2025-06-11 15:05:04.189500 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-11 15:05:04.189520 | orchestrator | Wednesday 11 June 2025 15:00:46 +0000 (0:00:08.019) 0:00:34.578 ******** 2025-06-11 15:05:04.189530 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.189541 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.189551 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.189561 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.189571 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.189582 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.189592 | orchestrator | 2025-06-11 15:05:04.189602 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-11 15:05:04.189619 | orchestrator | Wednesday 11 June 2025 15:00:46 +0000 (0:00:00.656) 0:00:35.234 ******** 2025-06-11 15:05:04.189629 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.189640 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.189650 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.189660 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.189670 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.189681 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.189691 | orchestrator | 2025-06-11 15:05:04.189701 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-11 15:05:04.189711 | orchestrator | Wednesday 11 June 2025 15:00:49 +0000 (0:00:02.532) 0:00:37.767 ******** 2025-06-11 15:05:04.189721 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:05:04.189731 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:05:04.189742 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:05:04.189752 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:05:04.189762 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:05:04.189772 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:05:04.189783 | orchestrator | 2025-06-11 15:05:04.189793 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-11 15:05:04.189803 | orchestrator | Wednesday 11 June 2025 15:00:50 +0000 (0:00:01.500) 0:00:39.268 ******** 2025-06-11 15:05:04.189814 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.189824 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.189834 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.189844 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.189854 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.189865 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.189875 | orchestrator | 2025-06-11 15:05:04.189885 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-11 15:05:04.189896 | orchestrator | Wednesday 11 June 2025 15:00:54 +0000 (0:00:03.470) 
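The service-ks-register block above maps one-to-one onto Identity API calls: create the service, its internal and public endpoints, the service project and user, then grant roles. A minimal openstacksdk sketch of the first two steps, with the endpoint URLs taken from the log; the cloud profile name and region are assumptions:

  # Keystone registration for neutron, sketched with openstacksdk.
  # URLs are from the log; cloud name and region are assumed values.
  import openstack

  conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

  service = conn.identity.create_service(name="neutron", type="network")
  for interface, url in [
      ("internal", "https://api-int.testbed.osism.xyz:9696"),
      ("public", "https://api.testbed.osism.xyz:9696"),
  ]:
      conn.identity.create_endpoint(
          service_id=service.id,
          interface=interface,
          url=url,
          region_id="RegionOne",  # assumed region
      )

The user creation and role grants seen in the log ("neutron -> service", "neutron -> service -> admin") follow the same pattern with create_user and the role-assignment helpers of the same identity proxy.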
0:00:42.738 ******** 2025-06-11 15:05:04.189909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.189953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.189972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.189987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
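Two healthcheck flavours appear in the container definitions above: healthcheck_curl probes an HTTP endpoint (neutron-server on 9696, placement-api on 8780), while healthcheck_port is used for the metadata agent against the OVSDB port 6640. Reduced to its core, the port variant is a TCP reachability test like the sketch below — a simplified stand-in, not the actual kolla helper, which additionally ties the check to the named process:

  # Simplified analogue of kolla's healthcheck_port helper: exit 0 when a
  # TCP connection to the target port succeeds, 1 otherwise.
  import socket
  import sys

  def check_port(host: str, port: int, timeout: float = 30.0) -> int:
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return 0  # healthy
      except OSError:
          return 1  # unhealthy

  if __name__ == "__main__":
      sys.exit(check_port("127.0.0.1", 6640))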
'timeout': '30'}}}) 2025-06-11 15:05:04.189999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.190058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.190073 | orchestrator | 2025-06-11 15:05:04.190084 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-11 15:05:04.190095 | orchestrator | Wednesday 11 June 2025 15:00:58 +0000 (0:00:04.122) 0:00:46.861 ******** 2025-06-11 15:05:04.190105 | orchestrator | [WARNING]: Skipped 2025-06-11 15:05:04.190116 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-11 15:05:04.190134 | orchestrator | due to this access issue: 2025-06-11 15:05:04.190144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-11 15:05:04.190155 | orchestrator | a directory 2025-06-11 15:05:04.190165 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:05:04.190175 | orchestrator | 2025-06-11 15:05:04.190185 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-11 15:05:04.190226 | orchestrator | Wednesday 11 June 2025 15:00:59 +0000 (0:00:00.856) 0:00:47.717 ******** 2025-06-11 15:05:04.190238 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:05:04.190250 | orchestrator | 2025-06-11 15:05:04.190260 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-11 15:05:04.190271 | orchestrator | Wednesday 11 June 2025 15:01:01 +0000 (0:00:01.744) 0:00:49.461 ******** 2025-06-11 15:05:04.190282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
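A note on the [WARNING] above: the "Check if extra ml2 plugins exists" task probes an optional overlay directory on the deploy host, and /opt/configuration/environments/kolla/files/overlays/neutron/plugins/ simply does not exist in this testbed. The find lookup skips the path and the task still returns ok, so the warning appears to be cosmetic rather than a deployment problem.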
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.190300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.190319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.190337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.190404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.190422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.190438 | orchestrator | 2025-06-11 15:05:04.190455 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-11 15:05:04.190472 | orchestrator | Wednesday 11 June 2025 15:01:05 +0000 (0:00:04.733) 0:00:54.195 ******** 2025-06-11 15:05:04.190492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190503 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.190513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190530 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.190540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190594 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.190603 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.190618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190629 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.190638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190648 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.190658 | orchestrator |
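Editor's note: the "Copying over backend internal TLS certificate" task above skips every item on every node, and the TLS key task that follows does the same. That is the expected outcome when backend TLS is not enabled for this deployment: the loop still iterates over the same service dicts shown in the item= dumps, but the task's condition evaluates false. A minimal sketch of that per-service gate, assuming the dict shape from the log; kolla_enable_tls_backend is the kolla-ansible global toggle, but the helper below is hypothetical, not kolla-ansible code.

# Service map shaped like the item= dumps above (trimmed to the fields used here).
NEUTRON_SERVICES = {
    'neutron-server': {
        'container_name': 'neutron_server',
        'image': 'registry.osism.tech/kolla/neutron-server:2024.2',
        'enabled': True,
        'host_in_groups': True,
    },
    'neutron-ovn-metadata-agent': {
        'container_name': 'neutron_ovn_metadata_agent',
        'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2',
        'enabled': True,
        'host_in_groups': True,
    },
}

def cert_copy_result(service: dict, kolla_enable_tls_backend: bool) -> str:
    # The copy only fires for services active on this host AND when backend
    # TLS is switched on globally; otherwise Ansible reports "skipping".
    if service['enabled'] and service['host_in_groups'] and kolla_enable_tls_backend:
        return 'changed'
    return 'skipping'

for name, svc in NEUTRON_SERVICES.items():
    # Backend TLS is off in this run, so every item prints "skipping".
    print(name, '->', cert_copy_result(svc, kolla_enable_tls_backend=False))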
2025-06-11 15:05:04.190667 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-11 15:05:04.190677 | orchestrator | Wednesday 11 June 2025 15:01:09 +0000 (0:00:03.781) 0:00:57.976 ******** 2025-06-11 15:05:04.190687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190702 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.190737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190749 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.190759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.190769 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.190788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190798 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.190808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190824 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.190834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.190844 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.190854 | orchestrator | 2025-06-11 15:05:04.190863 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-11 15:05:04.190873 | orchestrator | Wednesday 11 June 2025 15:01:13 +0000 (0:00:03.772) 0:01:01.749 ******** 2025-06-11 15:05:04.190883 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.190892 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.190902 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.190912 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.190921 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.190930 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.190940 | orchestrator | 2025-06-11 15:05:04.190949 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-11 15:05:04.190964 | orchestrator | Wednesday 11 June 2025 15:01:17 +0000 (0:00:03.721) 0:01:05.471 ******** 2025-06-11 15:05:04.190974 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.190983 | orchestrator | 2025-06-11 15:05:04.190993 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-11 15:05:04.191002 | orchestrator | Wednesday 11 June 2025 15:01:17 +0000 
(0:00:00.125) 0:01:05.596 ******** 2025-06-11 15:05:04.191029 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.191040 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.191049 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.191058 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191068 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191077 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191087 | orchestrator | 2025-06-11 15:05:04.191096 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-11 15:05:04.191106 | orchestrator | Wednesday 11 June 2025 15:01:18 +0000 (0:00:01.132) 0:01:06.728 ******** 2025-06-11 15:05:04.191120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.191130 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.191146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.191156 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.191166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.191176 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.191192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191203 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191223 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191252 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191262 | orchestrator | 2025-06-11 15:05:04.191272 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-11 15:05:04.191281 | orchestrator | Wednesday 11 June 2025 15:01:22 +0000 (0:00:04.362) 0:01:11.091 ******** 2025-06-11 15:05:04.191291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191369 | orchestrator | 2025-06-11 15:05:04.191379 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-11 15:05:04.191389 | orchestrator | Wednesday 11 June 2025 15:01:28 +0000 (0:00:05.587) 0:01:16.678 ******** 2025-06-11 15:05:04.191399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.191476 | orchestrator | 2025-06-11 15:05:04.191486 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-11 15:05:04.191496 | orchestrator | Wednesday 11 June 2025 15:01:35 +0000 (0:00:07.527) 0:01:24.206 ******** 2025-06-11 15:05:04.191512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191523 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191548 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191582 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191624 | orchestrator | 2025-06-11 15:05:04.191634 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-11 15:05:04.191644 | orchestrator | Wednesday 11 June 2025 15:01:39 +0000 (0:00:03.682) 0:01:27.889 ******** 2025-06-11 15:05:04.191653 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191663 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:04.191672 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:05:04.191682 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191691 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191701 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:05:04.191710 | orchestrator | 2025-06-11 15:05:04.191720 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-11 15:05:04.191730 | orchestrator | Wednesday 11 June 2025 15:01:42 +0000 (0:00:03.111) 0:01:31.001 ******** 2025-06-11 15:05:04.191743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191754 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191774 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.191794 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.191854 | orchestrator | 2025-06-11 15:05:04.191863 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-11 15:05:04.191873 | orchestrator | Wednesday 11 June 2025 15:01:47 +0000 (0:00:04.432) 0:01:35.433 ******** 2025-06-11 15:05:04.191882 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.191892 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.191901 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.191910 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191920 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.191929 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.191939 | orchestrator | 2025-06-11 15:05:04.191948 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-11 15:05:04.191958 | orchestrator | Wednesday 11 June 2025 15:01:49 +0000 (0:00:02.399) 0:01:37.833 ******** 2025-06-11 15:05:04.191967 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.191977 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.191986 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.191996 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192005 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192028 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192038 | orchestrator | 2025-06-11 15:05:04.192048 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-11 15:05:04.192058 | orchestrator | Wednesday 11 June 2025 15:01:51 +0000 (0:00:02.110) 0:01:39.943 ******** 2025-06-11 15:05:04.192067 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192077 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192086 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192096 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192110 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192120 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192129 | orchestrator | 2025-06-11 15:05:04.192139 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-11 15:05:04.192148 | orchestrator | Wednesday 11 June 2025 15:01:54 +0000 (0:00:02.716) 0:01:42.660 ******** 2025-06-11 15:05:04.192158 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192167 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192177 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192186 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192195 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192205 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192214 | orchestrator |
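Editor's note: the long run of skipped template tasks here (linuxbridge_agent.ini, openvswitch_agent.ini, sriov_agent.ini, mlnx_agent.ini, and the eswitchd/dhcp/dnsmasq/l3 templates that follow) is expected for this testbed: Neutron is deployed with OVN, so only neutron-server and neutron-ovn-metadata-agent appear in the item loops, and the per-agent config files have no matching service on any host. A rough sketch of that dispatch; the plugin-to-template table below is an illustrative assumption, not copied from kolla-ansible.

# Which config templates are rendered for which ML2 plugin choice (assumed,
# simplified mapping for illustration).
AGENT_CONFIGS = {
    'ovn': ['neutron.conf', 'ml2_conf.ini', 'neutron_ovn_metadata_agent.ini'],
    'openvswitch': ['neutron.conf', 'ml2_conf.ini', 'openvswitch_agent.ini',
                    'dhcp_agent.ini', 'l3_agent.ini', 'metadata_agent.ini'],
    'linuxbridge': ['neutron.conf', 'ml2_conf.ini', 'linuxbridge_agent.ini',
                    'dhcp_agent.ini', 'l3_agent.ini', 'metadata_agent.ini'],
}

def template_action(template: str, plugin_agent: str) -> str:
    # Templates outside the active plugin's set report "skipping" on all hosts.
    return 'changed' if template in AGENT_CONFIGS[plugin_agent] else 'skipping'

for tmpl in ('linuxbridge_agent.ini', 'openvswitch_agent.ini', 'sriov_agent.ini',
             'mlnx_agent.ini', 'eswitchd.conf', 'neutron_ovn_metadata_agent.ini'):
    print(f'{tmpl}: {template_action(tmpl, "ovn")}')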
2025-06-11 15:05:04.192224 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-11 15:05:04.192233 | orchestrator | Wednesday 11 June 2025 15:01:56 +0000 (0:00:02.744) 0:01:45.405 ******** 2025-06-11 15:05:04.192243 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192252 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192262 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192271 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192281 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192290 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192300 | orchestrator | 2025-06-11 15:05:04.192314 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-11 15:05:04.192324 | orchestrator | Wednesday 11 June 2025 15:01:58 +0000 (0:00:01.922) 0:01:47.327 ******** 2025-06-11 15:05:04.192334 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192343 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192353 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192362 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192372 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192381 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192390 | orchestrator | 2025-06-11 15:05:04.192400 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-11 15:05:04.192410 | orchestrator | Wednesday 11 June 2025 15:02:01 +0000 (0:00:02.180) 0:01:49.507 ******** 2025-06-11 15:05:04.192419 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192429 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192438 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192448 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192457 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192467 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192476 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192485 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192495 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192504 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192514 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-11 15:05:04.192524 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192533 | orchestrator | 2025-06-11 15:05:04.192542 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-11 15:05:04.192556 | orchestrator | Wednesday 11 June 2025 15:02:04 +0000 (0:00:03.330) 0:01:52.837 ******** 2025-06-11 15:05:04.192566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192582 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192602 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192627 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192647 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192661 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192676 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192696 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192705 | orchestrator | 2025-06-11 15:05:04.192715 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-11 15:05:04.192724 | orchestrator | Wednesday 11 June 2025 15:02:09 +0000 (0:00:05.315) 0:01:58.153 ******** 2025-06-11 15:05:04.192734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192744 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192770 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.192795 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192819 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192838 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.192858 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192867 | orchestrator | 2025-06-11 15:05:04.192877 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-11 15:05:04.192887 | orchestrator | Wednesday 11 June 2025 15:02:13 +0000 (0:00:04.019) 0:02:02.173 ******** 2025-06-11 15:05:04.192896 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.192906 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192915 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.192925 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.192934 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.192948 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.192958 | orchestrator | 2025-06-11 15:05:04.192968 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-11 15:05:04.192977 | orchestrator | Wednesday 11 June 2025 15:02:17 +0000 (0:00:04.223) 0:02:06.396 ******** 2025-06-11 15:05:04.192987 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.192996 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193005 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193038 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:05:04.193048 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:05:04.193057 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:05:04.193067 | orchestrator | 2025-06-11 15:05:04.193076 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-11 15:05:04.193092 | orchestrator | Wednesday 11 June 2025 15:02:21 +0000 (0:00:03.783) 0:02:10.179 ******** 2025-06-11 15:05:04.193102 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193111 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193120 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193129 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193139 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193148 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193158 | orchestrator | 2025-06-11 15:05:04.193167 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-11 15:05:04.193177 | orchestrator | Wednesday 11 June 2025 15:02:24 +0000 (0:00:02.339) 0:02:12.519 ******** 2025-06-11 15:05:04.193186 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193196 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193205 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193214 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193224 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193233 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193242 | orchestrator |
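Editor's note: the healthcheck blocks in the item= dumps above are Docker healthcheck settings rendered by kolla-ansible: every 30 seconds the container runs the listed CMD-SHELL test, with 3 retries, a 5-second start period, and a 30-second timeout. neutron_server probes its own API endpoint over HTTP (healthcheck_curl http://<api_ip>:9696), while neutron_ovn_metadata_agent checks its connection to the OVSDB port (healthcheck_port neutron-ovn-metadata-agent 6640). A stdlib-only approximation of the two probe shapes, illustrative rather than kolla's actual healthcheck scripts:

import socket
import urllib.request

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    # Kolla's script checks that a named process holds a connection to the
    # port; this sketch only tests that the port is reachable at all.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    # Healthy only if the endpoint answers without an HTTP or network error.
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

if __name__ == '__main__':
    # Shapes of the two checks seen in this log (addresses from the item= dumps):
    print(healthcheck_curl('http://192.168.16.10:9696'))  # neutron_server API
    print(healthcheck_port('192.168.16.10', 6640))        # OVSDB reachability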
skipping: [testbed-node-1] 2025-06-11 15:05:04.193285 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193294 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193304 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193313 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193322 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193332 | orchestrator | 2025-06-11 15:05:04.193341 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-11 15:05:04.193351 | orchestrator | Wednesday 11 June 2025 15:02:28 +0000 (0:00:03.141) 0:02:17.217 ******** 2025-06-11 15:05:04.193360 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193370 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193379 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193388 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193398 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193407 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193417 | orchestrator | 2025-06-11 15:05:04.193426 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-11 15:05:04.193436 | orchestrator | Wednesday 11 June 2025 15:02:31 +0000 (0:00:02.606) 0:02:19.824 ******** 2025-06-11 15:05:04.193445 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193454 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193464 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193473 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193482 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193492 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193501 | orchestrator | 2025-06-11 15:05:04.193510 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-11 15:05:04.193520 | orchestrator | Wednesday 11 June 2025 15:02:33 +0000 (0:00:01.781) 0:02:21.605 ******** 2025-06-11 15:05:04.193529 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193539 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193548 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193557 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193567 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193576 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193586 | orchestrator | 2025-06-11 15:05:04.193595 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-11 15:05:04.193604 | orchestrator | Wednesday 11 June 2025 15:02:34 +0000 (0:00:01.481) 0:02:23.086 ******** 2025-06-11 15:05:04.193619 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:04.193629 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.193638 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193647 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.193657 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.193666 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.193675 | orchestrator | 2025-06-11 15:05:04.193685 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-11 15:05:04.193695 | orchestrator | Wednesday 11 June 2025 15:02:36 +0000 (0:00:01.968) 0:02:25.055 ******** 2025-06-11 15:05:04.193704 | 
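Each service definition logged above carries a kolla-style healthcheck block ('interval', 'retries', 'start_period', 'test', 'timeout'). As a rough illustration only — kolla-ansible applies these through its own container modules, and this helper is not part of the deployment — such a block maps onto Docker's CLI health options roughly like this, assuming the numeric values are seconds:

```python
# Illustrative sketch: translate a kolla-style healthcheck dict into
# `docker run` health flags. Not how kolla-ansible itself applies them.
import shlex

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    cmd = hc["test"]              # e.g. ['CMD-SHELL', 'healthcheck_curl http://...']
    if cmd and cmd[0] == "CMD-SHELL":
        cmd = cmd[1]              # CMD-SHELL carries a single shell string
    return [
        "--health-cmd", cmd if isinstance(cmd, str) else shlex.join(cmd),
        "--health-interval", f"{hc['interval']}s",   # assumption: values are seconds
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example with the neutron-server healthcheck seen in this run:
print(healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
}))
```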
2025-06-11 15:05:04.193685 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-11 15:05:04.193695 | orchestrator | Wednesday 11 June 2025 15:02:36 +0000 (0:00:01.968) 0:02:25.055 ********
2025-06-11 15:05:04.193704 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:05:04.193713 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:05:04.193723 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:05:04.193732 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:05:04.193741 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:05:04.193751 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:05:04.193760 | orchestrator |
2025-06-11 15:05:04.193770 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-11 15:05:04.193779 | orchestrator | Wednesday 11 June 2025 15:02:38 +0000 (0:00:01.732) 0:02:26.787 ********
2025-06-11 15:05:04.193789 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193798 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:05:04.193808 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193817 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:05:04.193832 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193843 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:05:04.193852 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193862 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:05:04.193871 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193881 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:05:04.193890 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-11 15:05:04.193900 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:05:04.193909 | orchestrator |
2025-06-11 15:05:04.193919 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-11 15:05:04.193928 | orchestrator | Wednesday 11 June 2025 15:02:42 +0000 (0:00:03.782) 0:02:30.570 ********
2025-06-11 15:05:04.193942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-11 15:05:04.193952 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:05:04.193962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.193977 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:04.193987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.193997 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:05:04.194007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.194057 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:05:04.194074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-11 15:05:04.194085 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:04.194099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-11 15:05:04.194115 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:05:04.194125 | orchestrator | 2025-06-11 15:05:04.194134 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-11 15:05:04.194144 | orchestrator | Wednesday 11 June 2025 15:02:46 +0000 (0:00:04.415) 0:02:34.986 ******** 2025-06-11 15:05:04.194154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-11 15:05:04.194164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-11 15:05:04.194180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-11 15:05:04.194191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-11 15:05:04.194207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-11 15:05:04.194223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-11 15:05:04.194233 | orchestrator |
2025-06-11 15:05:04.194243 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-11 15:05:04.194253 | orchestrator | Wednesday 11 June 2025 15:02:49 +0000 (0:00:03.042) 0:02:38.028 ********
2025-06-11 15:05:04.194263 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:05:04.194272 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:05:04.194282 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:05:04.194291 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:05:04.194300 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:05:04.194310 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:05:04.194319 | orchestrator |
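The next tasks create the Neutron database and its user. A minimal sketch of what they amount to, assuming a reachable MariaDB endpoint; the host and credentials below are placeholders, not values from this run, and kolla-ansible drives this through its own modules rather than raw SQL:

```python
# Hypothetical sketch of "Creating Neutron database" and
# "Creating Neutron database user and setting permissions".
import pymysql

conn = pymysql.connect(host="db.example.net", user="root", password="secret")  # placeholders
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE IF NOT EXISTS neutron")
    cur.execute("CREATE USER IF NOT EXISTS 'neutron'@'%%' IDENTIFIED BY %s", ("db-password",))
    cur.execute("GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%%'")
conn.commit()
```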
2025-06-11 15:05:04.194328 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-06-11 15:05:04.194338 | orchestrator | Wednesday 11 June 2025 15:02:50 +0000 (0:00:00.489) 0:02:38.518 ********
2025-06-11 15:05:04.194347 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:04.194357 | orchestrator |
2025-06-11 15:05:04.194366 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-06-11 15:05:04.194375 | orchestrator | Wednesday 11 June 2025 15:02:52 +0000 (0:00:02.300) 0:02:40.818 ********
2025-06-11 15:05:04.194385 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:04.194394 | orchestrator |
2025-06-11 15:05:04.194404 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-06-11 15:05:04.194413 | orchestrator | Wednesday 11 June 2025 15:02:54 +0000 (0:00:02.369) 0:02:43.188 ********
2025-06-11 15:05:04.194422 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:04.194432 | orchestrator |
2025-06-11 15:05:04.194441 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194451 | orchestrator | Wednesday 11 June 2025 15:03:40 +0000 (0:00:46.208) 0:03:29.397 ********
2025-06-11 15:05:04.194460 | orchestrator |
2025-06-11 15:05:04.194469 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194479 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.075) 0:03:29.472 ********
2025-06-11 15:05:04.194488 | orchestrator |
2025-06-11 15:05:04.194498 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194512 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.162) 0:03:29.635 ********
2025-06-11 15:05:04.194522 | orchestrator |
2025-06-11 15:05:04.194532 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194541 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.057) 0:03:29.692 ********
2025-06-11 15:05:04.194550 | orchestrator |
2025-06-11 15:05:04.194560 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194569 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.058) 0:03:29.750 ********
2025-06-11 15:05:04.194598 | orchestrator |
2025-06-11 15:05:04.194608 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-11 15:05:04.194618 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.057) 0:03:29.808 ********
2025-06-11 15:05:04.194627 | orchestrator |
2025-06-11 15:05:04.194636 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-11 15:05:04.194646 | orchestrator | Wednesday 11 June 2025 15:03:41 +0000 (0:00:00.059) 0:03:29.867 ********
2025-06-11 15:05:04.194655 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:04.194665 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:05:04.194674 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:05:04.194684 | orchestrator |
2025-06-11 15:05:04.194693 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-11 15:05:04.194703 | orchestrator | Wednesday 11 June 2025 15:04:04 +0000 (0:00:23.420) 0:03:53.288 ********
2025-06-11 15:05:04.194712 | orchestrator | changed: [testbed-node-4]
2025-06-11 15:05:04.194721 | orchestrator | changed: [testbed-node-3]
2025-06-11 15:05:04.194731 | orchestrator | changed: [testbed-node-5]
2025-06-11 15:05:04.194740 | orchestrator |
2025-06-11 15:05:04.194749 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:05:04.194759 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-11 15:05:04.194774 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-11 15:05:04.194784 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-11 15:05:04.194794 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-11 15:05:04.194804 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-11 15:05:04.194813 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-11 15:05:04.194823 | orchestrator |
2025-06-11 15:05:04.194832 | orchestrator |
2025-06-11 15:05:04.194841 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:05:04.194851 | orchestrator | Wednesday 11 June 2025 15:05:02 +0000 (0:00:57.951) 0:04:51.240 ********
2025-06-11 15:05:04.194861 | orchestrator | ===============================================================================
2025-06-11 15:05:04.194870 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 57.95s
2025-06-11 15:05:04.194879 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.21s
2025-06-11 15:05:04.194889 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.42s
2025-06-11 15:05:04.194898 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.02s
2025-06-11 15:05:04.194907 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.53s
2025-06-11 15:05:04.194917 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.99s
2025-06-11 15:05:04.194926 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.59s
2025-06-11 15:05:04.194936 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 5.32s
2025-06-11 15:05:04.194945 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.73s
2025-06-11 15:05:04.194955 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.43s
2025-06-11 15:05:04.194964 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 4.42s
2025-06-11 15:05:04.194973 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.36s
2025-06-11 15:05:04.194988 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.22s
2025-06-11 15:05:04.194998 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.12s
2025-06-11 15:05:04.195007 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.02s
2025-06-11 15:05:04.195030 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.82s
2025-06-11 15:05:04.195039 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.78s
2025-06-11 15:05:04.195049 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.78s
2025-06-11 15:05:04.195058 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.78s
2025-06-11 15:05:04.195068 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.77s
2025-06-11 15:05:04.195083 | orchestrator | 2025-06-11 15:05:04 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:04.195093 | orchestrator | 2025-06-11 15:05:04 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:04.195103 | orchestrator | 2025-06-11 15:05:04 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:07.218746 | orchestrator | 2025-06-11 15:05:07 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:07.219377 | orchestrator | 2025-06-11 15:05:07 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:07.219982 | orchestrator | 2025-06-11 15:05:07 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:07.223258 | orchestrator | 2025-06-11 15:05:07 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:07.223511 | orchestrator | 2025-06-11 15:05:07 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:10.265134 | orchestrator | 2025-06-11 15:05:10 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:10.268086 | orchestrator | 2025-06-11 15:05:10 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:10.270166 | orchestrator | 2025-06-11 15:05:10 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:10.273289 | orchestrator | 2025-06-11 15:05:10 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:10.273324 | orchestrator | 2025-06-11 15:05:10 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:13.301552 | orchestrator | 2025-06-11 15:05:13 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:13.301651 | orchestrator | 2025-06-11 15:05:13 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:13.302367 | orchestrator | 2025-06-11 15:05:13 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:13.302878 | orchestrator | 2025-06-11 15:05:13 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:13.304179 | orchestrator | 2025-06-11 15:05:13 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:16.339559 | orchestrator | 2025-06-11 15:05:16 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:16.339807 | orchestrator | 2025-06-11 15:05:16 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:16.342740 | orchestrator | 2025-06-11 15:05:16 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:16.343764 | orchestrator | 2025-06-11 15:05:16 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:16.343847 | orchestrator | 2025-06-11 15:05:16 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:19.365607 | orchestrator | 2025-06-11 15:05:19 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:19.365723 | orchestrator | 2025-06-11 15:05:19 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:19.366106 | orchestrator | 2025-06-11 15:05:19 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:19.366422 | orchestrator | 2025-06-11 15:05:19 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:19.366444 | orchestrator | 2025-06-11 15:05:19 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:22.402235 | orchestrator | 2025-06-11 15:05:22 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:22.402642 | orchestrator | 2025-06-11 15:05:22 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:22.404827 | orchestrator | 2025-06-11 15:05:22 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:22.405359 | orchestrator | 2025-06-11 15:05:22 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:22.405386 | orchestrator | 2025-06-11 15:05:22 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:25.434374 | orchestrator | 2025-06-11 15:05:25 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:25.438611 | orchestrator | 2025-06-11 15:05:25 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:25.439031 | orchestrator | 2025-06-11 15:05:25 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:25.440205 | orchestrator | 2025-06-11 15:05:25 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state STARTED
2025-06-11 15:05:25.440234 | orchestrator | 2025-06-11 15:05:25 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:05:28.470282 | orchestrator | 2025-06-11 15:05:28 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED
2025-06-11 15:05:28.473554 | orchestrator | 2025-06-11 15:05:28 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED
2025-06-11 15:05:28.476067 | orchestrator | 2025-06-11 15:05:28 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED
2025-06-11 15:05:28.478745 | orchestrator | 2025-06-11 15:05:28 | INFO  | Task 015e6778-6b4f-4a2f-84ca-d92996ac09b3 is in state SUCCESS
2025-06-11 15:05:28.480531 | orchestrator |
2025-06-11 15:05:28.480564 | orchestrator |
2025-06-11 15:05:28.480576 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:05:28.480588 | orchestrator |
2025-06-11 15:05:28.480600 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:05:28.480611 | orchestrator | Wednesday 11 June 2025 15:03:38 +0000 (0:00:00.284) 0:00:00.284 ********
2025-06-11 15:05:28.480622 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:05:28.480634 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:05:28.480645 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:05:28.480656 | orchestrator |
2025-06-11 15:05:28.480667 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:05:28.480678 | orchestrator | Wednesday 11 June 2025 15:03:39 +0000 (0:00:00.281) 0:00:00.566 ********
2025-06-11 15:05:28.480689 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-06-11 15:05:28.480700 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-06-11 15:05:28.480711 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-06-11 15:05:28.480722 | orchestrator |
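The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a client polling loop over task IDs. The following is a hand-written approximation of that pattern, not the actual osism client code; get_task_state is a hypothetical callback:

```python
# Sketch of a poll-until-done loop matching the log pattern above.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task until it leaves the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)          # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```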
2025-06-11 15:05:28.480757 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-06-11 15:05:28.480768 | orchestrator |
2025-06-11 15:05:28.480790 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-11 15:05:28.480801 | orchestrator | Wednesday 11 June 2025 15:03:39 +0000 (0:00:00.334) 0:00:00.901 ********
2025-06-11 15:05:28.480812 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:05:28.480823 | orchestrator |
2025-06-11 15:05:28.480834 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-06-11 15:05:28.480844 | orchestrator | Wednesday 11 June 2025 15:03:39 +0000 (0:00:00.459) 0:00:01.361 ********
2025-06-11 15:05:28.480856 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-06-11 15:05:28.480866 | orchestrator |
2025-06-11 15:05:28.480877 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-06-11 15:05:28.480887 | orchestrator | Wednesday 11 June 2025 15:03:43 +0000 (0:00:03.215) 0:00:04.577 ********
2025-06-11 15:05:28.480898 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-06-11 15:05:28.480909 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-06-11 15:05:28.480919 | orchestrator |
2025-06-11 15:05:28.480930 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-06-11 15:05:28.480941 | orchestrator | Wednesday 11 June 2025 15:03:49 +0000 (0:00:06.703) 0:00:11.280 ********
2025-06-11 15:05:28.480952 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-11 15:05:28.480963 | orchestrator |
2025-06-11 15:05:28.480974 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-06-11 15:05:28.481021 | orchestrator | Wednesday 11 June 2025 15:03:52 +0000 (0:00:03.170) 0:00:14.450 ********
2025-06-11 15:05:28.481033 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-11 15:05:28.481043 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-06-11 15:05:28.481054 | orchestrator |
2025-06-11 15:05:28.481065 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-06-11 15:05:28.481076 | orchestrator | Wednesday 11 June 2025 15:03:56 +0000 (0:00:03.638) 0:00:18.089 ********
2025-06-11 15:05:28.481086 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-11 15:05:28.481097 | orchestrator |
2025-06-11 15:05:28.481107 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-06-11 15:05:28.481118 | orchestrator | Wednesday 11 June 2025 15:03:59 +0000 (0:00:03.332) 0:00:21.421 ********
2025-06-11 15:05:28.481129 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-06-11 15:05:28.481142 | orchestrator |
2025-06-11 15:05:28.481154 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-06-11 15:05:28.481166 | orchestrator | Wednesday 11 June 2025 15:04:04 +0000 (0:00:04.183) 0:00:25.604 ********
2025-06-11 15:05:28.481178 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:28.481190 | orchestrator |
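The service-ks-register tasks above register magnum in Keystone: a service, its internal and public endpoints, the service user in the service project, and the admin role grant. Sketched below with openstacksdk for illustration only — the role itself runs Ansible OpenStack modules via kolla-toolbox, and the cloud name and password are placeholders:

```python
# Illustrative openstacksdk equivalent of the Keystone registration steps.
import openstack

conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

# Service and endpoints (URLs taken from the log above).
svc = conn.identity.create_service(name="magnum", type="container-infra")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9511/v1"),
    ("public", "https://api.testbed.osism.xyz:9511/v1"),
]:
    conn.identity.create_endpoint(service_id=svc.id, interface=interface, url=url)

# Service user and role grant in the "service" project.
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="magnum", password="secret",  # placeholder
                                 default_project_id=project.id)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)
```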
2025-06-11 15:05:28.481202 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-06-11 15:05:28.481215 | orchestrator | Wednesday 11 June 2025 15:04:07 +0000 (0:00:03.315) 0:00:28.920 ********
2025-06-11 15:05:28.481227 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:28.481239 | orchestrator |
2025-06-11 15:05:28.481251 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-06-11 15:05:28.481263 | orchestrator | Wednesday 11 June 2025 15:04:11 +0000 (0:00:03.739) 0:00:32.659 ********
2025-06-11 15:05:28.481275 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:05:28.481287 | orchestrator |
2025-06-11 15:05:28.481300 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-06-11 15:05:28.481312 | orchestrator | Wednesday 11 June 2025 15:04:14 +0000 (0:00:03.750) 0:00:36.410 ********
2025-06-11 15:05:28.481339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-11 15:05:28.481369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-11 15:05:28.481383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481450 | orchestrator | 2025-06-11 15:05:28.481463 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-11 15:05:28.481476 | orchestrator | Wednesday 11 June 2025 15:04:16 +0000 (0:00:01.724) 0:00:38.134 ******** 2025-06-11 15:05:28.481489 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.481500 | orchestrator | 2025-06-11 15:05:28.481511 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-11 15:05:28.481522 | orchestrator | Wednesday 11 June 2025 15:04:16 +0000 (0:00:00.147) 0:00:38.282 ******** 2025-06-11 15:05:28.481532 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.481543 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:28.481554 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:28.481564 | orchestrator | 2025-06-11 15:05:28.481575 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-11 15:05:28.481586 | orchestrator 
| Wednesday 11 June 2025 15:04:17 +0000 (0:00:00.477) 0:00:38.759 ******** 2025-06-11 15:05:28.481596 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:05:28.481607 | orchestrator | 2025-06-11 15:05:28.481622 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-11 15:05:28.481633 | orchestrator | Wednesday 11 June 2025 15:04:18 +0000 (0:00:00.990) 0:00:39.750 ******** 2025-06-11 15:05:28.481644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481733 | orchestrator | 2025-06-11 15:05:28.481744 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-11 15:05:28.481755 | orchestrator | Wednesday 11 June 2025 15:04:20 +0000 (0:00:02.499) 0:00:42.249 ******** 2025-06-11 15:05:28.481766 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:05:28.481777 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:05:28.481788 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:05:28.481798 | orchestrator | 2025-06-11 15:05:28.481809 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-11 15:05:28.481819 | orchestrator | Wednesday 11 June 2025 15:04:20 +0000 (0:00:00.292) 0:00:42.542 ******** 2025-06-11 15:05:28.481830 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:05:28.481841 | orchestrator | 2025-06-11 15:05:28.481851 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-11 15:05:28.481872 | orchestrator | Wednesday 11 June 2025 15:04:21 +0000 (0:00:00.677) 0:00:43.219 ******** 2025-06-11 15:05:28.481883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.481929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.481969 | orchestrator | 2025-06-11 15:05:28.481994 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-11 15:05:28.482006 | orchestrator | Wednesday 11 June 2025 15:04:23 +0000 (0:00:02.287) 0:00:45.507 ******** 2025-06-11 15:05:28.482068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482101 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.482113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482142 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:28.482153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482183 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:28.482194 | orchestrator | 2025-06-11 15:05:28.482205 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-11 15:05:28.482216 | orchestrator | Wednesday 11 June 2025 15:04:24 +0000 (0:00:00.607) 0:00:46.114 
******** 2025-06-11 15:05:28.482231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482266 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.482277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482300 | orchestrator | skipping: 
[testbed-node-1] 2025-06-11 15:05:28.482318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482346 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:28.482357 | orchestrator | 2025-06-11 15:05:28.482374 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-11 15:05:28.482385 | orchestrator | Wednesday 11 June 2025 15:04:25 +0000 (0:00:01.165) 0:00:47.279 ******** 2025-06-11 15:05:28.482396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482481 | orchestrator | 2025-06-11 15:05:28.482492 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-11 15:05:28.482502 | orchestrator | Wednesday 11 June 2025 15:04:28 +0000 (0:00:02.326) 0:00:49.606 ******** 2025-06-11 15:05:28.482514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482602 | orchestrator | 2025-06-11 15:05:28.482613 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-11 15:05:28.482623 | orchestrator | Wednesday 11 June 2025 15:04:33 +0000 (0:00:05.839) 0:00:55.445 ******** 2025-06-11 15:05:28.482640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482673 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.482685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482707 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:28.482718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-11 15:05:28.482735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:05:28.482746 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:28.482757 | orchestrator | 2025-06-11 15:05:28.482768 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-11 15:05:28.482779 | orchestrator | Wednesday 11 June 2025 15:04:34 +0000 (0:00:00.681) 0:00:56.127 ******** 2025-06-11 15:05:28.482799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 15:05:28.482823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-11 
15:05:28.482835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:05:28.482886 | orchestrator | 2025-06-11 15:05:28.482897 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-11 15:05:28.482907 | orchestrator | Wednesday 11 June 2025 15:04:36 +0000 (0:00:01.821) 0:00:57.948 ******** 2025-06-11 15:05:28.482918 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:05:28.482929 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:05:28.482939 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:05:28.482950 | orchestrator | 2025-06-11 15:05:28.482960 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-11 15:05:28.482971 | orchestrator | Wednesday 11 June 2025 15:04:36 +0000 (0:00:00.279) 0:00:58.228 ******** 2025-06-11 15:05:28.483030 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:28.483042 | orchestrator | 2025-06-11 15:05:28.483053 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-11 15:05:28.483064 | orchestrator | Wednesday 11 June 2025 15:04:38 +0000 (0:00:02.276) 0:01:00.504 ******** 2025-06-11 15:05:28.483074 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:28.483085 | orchestrator | 2025-06-11 
15:05:28.483096 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-11 15:05:28.483106 | orchestrator | Wednesday 11 June 2025 15:04:41 +0000 (0:00:02.227) 0:01:02.732 ******** 2025-06-11 15:05:28.483117 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:28.483127 | orchestrator | 2025-06-11 15:05:28.483138 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-11 15:05:28.483148 | orchestrator | Wednesday 11 June 2025 15:04:57 +0000 (0:00:16.504) 0:01:19.236 ******** 2025-06-11 15:05:28.483157 | orchestrator | 2025-06-11 15:05:28.483166 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-11 15:05:28.483176 | orchestrator | Wednesday 11 June 2025 15:04:57 +0000 (0:00:00.064) 0:01:19.301 ******** 2025-06-11 15:05:28.483185 | orchestrator | 2025-06-11 15:05:28.483194 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-11 15:05:28.483204 | orchestrator | Wednesday 11 June 2025 15:04:57 +0000 (0:00:00.062) 0:01:19.363 ******** 2025-06-11 15:05:28.483213 | orchestrator | 2025-06-11 15:05:28.483223 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-11 15:05:28.483232 | orchestrator | Wednesday 11 June 2025 15:04:57 +0000 (0:00:00.063) 0:01:19.427 ******** 2025-06-11 15:05:28.483241 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:28.483251 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:05:28.483260 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:05:28.483270 | orchestrator | 2025-06-11 15:05:28.483279 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-11 15:05:28.483288 | orchestrator | Wednesday 11 June 2025 15:05:12 +0000 (0:00:14.218) 0:01:33.646 ******** 2025-06-11 15:05:28.483298 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:05:28.483307 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:05:28.483317 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:05:28.483326 | orchestrator | 2025-06-11 15:05:28.483335 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:05:28.483351 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-11 15:05:28.483361 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 15:05:28.483371 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-11 15:05:28.483381 | orchestrator | 2025-06-11 15:05:28.483390 | orchestrator | 2025-06-11 15:05:28.483400 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:05:28.483409 | orchestrator | Wednesday 11 June 2025 15:05:28 +0000 (0:00:16.030) 0:01:49.676 ******** 2025-06-11 15:05:28.483419 | orchestrator | =============================================================================== 2025-06-11 15:05:28.483428 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.50s 2025-06-11 15:05:28.483443 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.03s 2025-06-11 15:05:28.483453 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.22s 
2025-06-11 15:05:28.483463 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.70s 2025-06-11 15:05:28.483472 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.84s 2025-06-11 15:05:28.483482 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.18s 2025-06-11 15:05:28.483491 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.75s 2025-06-11 15:05:28.483500 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.74s 2025-06-11 15:05:28.483510 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.64s 2025-06-11 15:05:28.483519 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.33s 2025-06-11 15:05:28.483528 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2025-06-11 15:05:28.483542 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.22s 2025-06-11 15:05:28.483551 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.17s 2025-06-11 15:05:28.483561 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.50s 2025-06-11 15:05:28.483570 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.33s 2025-06-11 15:05:28.483580 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.29s 2025-06-11 15:05:28.483589 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.28s 2025-06-11 15:05:28.483598 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.23s 2025-06-11 15:05:28.483608 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.82s 2025-06-11 15:05:28.483617 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.72s 2025-06-11 15:05:28.483626 | orchestrator | 2025-06-11 15:05:28 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:05:31.525856 | orchestrator | 2025-06-11 15:05:31 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:05:31.526189 | orchestrator | 2025-06-11 15:05:31 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:05:31.527225 | orchestrator | 2025-06-11 15:05:31 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:05:31.528892 | orchestrator | 2025-06-11 15:05:31 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:05:31.528922 | orchestrator | 2025-06-11 15:05:31 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:05:34.565101 | orchestrator | 2025-06-11 15:05:34 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:05:34.565445 | orchestrator | 2025-06-11 15:05:34 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state STARTED 2025-06-11 15:05:34.566344 | orchestrator | 2025-06-11 15:05:34 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:05:34.567110 | orchestrator | 2025-06-11 15:05:34 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:05:34.567259 | orchestrator | 2025-06-11 15:05:34 | INFO  | Wait 1 second(s) until the next check 2025-06-11 
15:07:15.092762 | orchestrator | 2025-06-11 15:07:15 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:15.095010 | orchestrator | 2025-06-11 15:07:15 | INFO  | Task 7c9919fa-01a1-49a6-9344-59d020ab82e5 is in state SUCCESS 2025-06-11 15:07:15.096193 | orchestrator | 2025-06-11
15:07:15.096274 | orchestrator | 2025-06-11 15:07:15.096290 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:07:15.096303 | orchestrator | 2025-06-11 15:07:15.096313 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:07:15.096323 | orchestrator | Wednesday 11 June 2025 15:04:32 +0000 (0:00:00.201) 0:00:00.201 ******** 2025-06-11 15:07:15.096334 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:07:15.096345 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:07:15.096380 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:07:15.096391 | orchestrator | 2025-06-11 15:07:15.096401 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:07:15.096411 | orchestrator | Wednesday 11 June 2025 15:04:32 +0000 (0:00:00.205) 0:00:00.407 ******** 2025-06-11 15:07:15.096420 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-11 15:07:15.096431 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-11 15:07:15.096441 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-11 15:07:15.096451 | orchestrator | 2025-06-11 15:07:15.096461 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-11 15:07:15.096470 | orchestrator | 2025-06-11 15:07:15.096505 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-11 15:07:15.096516 | orchestrator | Wednesday 11 June 2025 15:04:32 +0000 (0:00:00.326) 0:00:00.734 ******** 2025-06-11 15:07:15.096526 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:07:15.096537 | orchestrator | 2025-06-11 15:07:15.096546 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-11 15:07:15.096556 | orchestrator | Wednesday 11 June 2025 15:04:32 +0000 (0:00:00.394) 0:00:01.128 ******** 2025-06-11 15:07:15.096566 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-11 15:07:15.096575 | orchestrator | 2025-06-11 15:07:15.096585 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-11 15:07:15.096595 | orchestrator | Wednesday 11 June 2025 15:04:36 +0000 (0:00:03.436) 0:00:04.564 ******** 2025-06-11 15:07:15.096605 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-11 15:07:15.096615 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-11 15:07:15.096625 | orchestrator | 2025-06-11 15:07:15.096634 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-11 15:07:15.096644 | orchestrator | Wednesday 11 June 2025 15:04:42 +0000 (0:00:06.450) 0:00:11.015 ******** 2025-06-11 15:07:15.096654 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:07:15.096664 | orchestrator | 2025-06-11 15:07:15.096676 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-11 15:07:15.096693 | orchestrator | Wednesday 11 June 2025 15:04:46 +0000 (0:00:03.460) 0:00:14.476 ******** 2025-06-11 15:07:15.096709 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:07:15.096726 | orchestrator | 
2025-06-11 15:07:15.096726 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-11 15:07:15.096742 | orchestrator | 2025-06-11 15:07:15.096757 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-11 15:07:15.096774 | orchestrator | Wednesday 11 June 2025 15:04:50 +0000 (0:00:03.938) 0:00:18.414 ******** 2025-06-11 15:07:15.096791 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:07:15.096809 | orchestrator | 2025-06-11 15:07:15.096826 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-11 15:07:15.096843 | orchestrator | Wednesday 11 June 2025 15:04:53 +0000 (0:00:03.386) 0:00:21.800 ******** 2025-06-11 15:07:15.096889 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-11 15:07:15.096906 | orchestrator | 2025-06-11 15:07:15.096922 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-11 15:07:15.096938 | orchestrator | Wednesday 11 June 2025 15:04:57 +0000 (0:00:04.134) 0:00:25.935 ******** 2025-06-11 15:07:15.096997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097086 | orchestrator | 2025-06-11 15:07:15.097106 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-11 15:07:15.097124 | orchestrator | Wednesday 11 June 2025 15:05:02 +0000 (0:00:04.533) 0:00:30.469 ******** 2025-06-11 15:07:15.097149 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:07:15.097169 | orchestrator | 2025-06-11 15:07:15.097200 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-11 15:07:15.097220 | orchestrator | Wednesday 11 June 2025 15:05:02 +0000 (0:00:00.554) 0:00:31.023 ******** 2025-06-11 15:07:15.097239 | orchestrator | changed: 
[testbed-node-0] 2025-06-11 15:07:15.097259 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:07:15.097271 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:07:15.097282 | orchestrator | 2025-06-11 15:07:15.097292 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-11 15:07:15.097303 | orchestrator | Wednesday 11 June 2025 15:05:06 +0000 (0:00:03.200) 0:00:34.224 ******** 2025-06-11 15:07:15.097314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097347 | orchestrator | 2025-06-11 15:07:15.097358 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-11 15:07:15.097372 | orchestrator | Wednesday 11 June 2025 15:05:07 +0000 (0:00:01.500) 0:00:35.725 ******** 2025-06-11 15:07:15.097390 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:15.097445 | orchestrator | 2025-06-11 15:07:15.097464 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-11 15:07:15.097483 | orchestrator | Wednesday 11 June 2025 15:05:08 +0000 (0:00:01.262) 0:00:36.987 ******** 2025-06-11 15:07:15.097499 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:07:15.097511 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:07:15.097521 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:07:15.097532 | orchestrator | 2025-06-11 15:07:15.097543 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-11 15:07:15.097553 | orchestrator | Wednesday 11 June 2025 15:05:09 +0000 (0:00:00.815) 0:00:37.803 ******** 2025-06-11 15:07:15.097564 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.097575 | orchestrator | 2025-06-11 15:07:15.097585 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-11 15:07:15.097596 | orchestrator | Wednesday 11 June 2025 15:05:09 +0000 (0:00:00.134) 0:00:37.937 ******** 2025-06-11 15:07:15.097606 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.097617 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.097628 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.097648 | orchestrator | 2025-06-11 15:07:15.097659 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-11 15:07:15.097670 | orchestrator | Wednesday 11 June 2025 15:05:10 +0000 (0:00:00.269) 0:00:38.207 ******** 2025-06-11 15:07:15.097680 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:07:15.097691 | orchestrator | 2025-06-11 15:07:15.097702 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 
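[Editor's aside: the glance-api container definitions echoed in the preceding tasks (and again in the cert-copy items that follow) carry an 'haproxy' block whose custom_member_list is a list of pre-formatted HAProxy server lines, plus 6-hour client/server timeouts to accommodate long image uploads. A toy Python renderer below shows how such a block maps onto an HAProxy backend stanza; Kolla Ansible does this with Jinja2 templates, so this is only a sketch of the data flow, using values copied from the log.]

# Toy renderer: from a Kolla-style haproxy service dict to a backend stanza.
def render_backend(name, svc):
    lines = [f"backend {name}_back", f"    mode {svc['mode']}"]
    lines += [f"    {opt}" for opt in svc.get("backend_http_extra", [])]
    # custom_member_list entries are already full 'server ...' lines;
    # the trailing empty string seen in the log is skipped here.
    lines += [f"    {member}" for member in svc["custom_member_list"] if member]
    return "\n".join(lines)

glance_api = {
    "mode": "http",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",
    ],
}
print(render_backend("glance_api", glance_api))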
2025-06-11 15:07:15.097713 | orchestrator | Wednesday 11 June 2025 15:05:10 +0000 (0:00:00.606) 0:00:38.814 ******** 2025-06-11 15:07:15.097740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.097787 | orchestrator | 2025-06-11 15:07:15.097799 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-11 15:07:15.097809 | orchestrator | Wednesday 11 June 2025 15:05:15 +0000 (0:00:05.316) 0:00:44.130 ******** 2025-06-11 15:07:15.097836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.097888 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.097903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.097923 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.097949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.097962 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.097974 | orchestrator | 2025-06-11 15:07:15.097985 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-11 15:07:15.097995 | orchestrator | Wednesday 11 June 2025 15:05:18 +0000 (0:00:02.802) 0:00:46.933 ******** 2025-06-11 15:07:15.098007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.098097 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.098157 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-11 15:07:15.098194 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098205 | orchestrator | 2025-06-11 15:07:15.098216 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-11 15:07:15.098227 | orchestrator | Wednesday 11 June 2025 15:05:21 +0000 (0:00:02.883) 0:00:49.816 ******** 2025-06-11 15:07:15.098250 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098263 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098273 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098284 | orchestrator | 2025-06-11 15:07:15.098295 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-11 15:07:15.098307 | orchestrator | Wednesday 11 June 2025 15:05:24 +0000 (0:00:03.129) 0:00:52.946 ******** 2025-06-11 15:07:15.098330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.098344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.098363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.098376 | orchestrator | 2025-06-11 15:07:15.098387 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-11 15:07:15.098398 | orchestrator | Wednesday 11 June 2025 15:05:28 +0000 (0:00:03.815) 0:00:56.762 ******** 2025-06-11 15:07:15.098421 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.098433 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:07:15.098444 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:07:15.098454 | orchestrator | 2025-06-11 15:07:15.098465 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-11 15:07:15.098476 | orchestrator | Wednesday 11 June 2025 15:05:36 +0000 (0:00:07.664) 0:01:04.426 ******** 2025-06-11 15:07:15.098487 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098498 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098514 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098525 | orchestrator | 2025-06-11 15:07:15.098536 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-11 15:07:15.098554 | orchestrator | Wednesday 11 June 2025 15:05:40 +0000 (0:00:04.289) 0:01:08.715 ******** 2025-06-11 15:07:15.098575 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098595 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098614 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098632 | orchestrator | 2025-06-11 15:07:15.098653 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-11 15:07:15.098685 | orchestrator | Wednesday 11 June 2025 15:05:46 +0000 (0:00:05.955) 0:01:14.671 ******** 2025-06-11 15:07:15.098705 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098720 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098731 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098742 | 
orchestrator | 2025-06-11 15:07:15.098753 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-11 15:07:15.098763 | orchestrator | Wednesday 11 June 2025 15:05:50 +0000 (0:00:04.019) 0:01:18.691 ******** 2025-06-11 15:07:15.098774 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098785 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098796 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098807 | orchestrator | 2025-06-11 15:07:15.098817 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-11 15:07:15.098828 | orchestrator | Wednesday 11 June 2025 15:05:54 +0000 (0:00:03.784) 0:01:22.476 ******** 2025-06-11 15:07:15.098839 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098917 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.098930 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.098941 | orchestrator | 2025-06-11 15:07:15.098952 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-11 15:07:15.098963 | orchestrator | Wednesday 11 June 2025 15:05:54 +0000 (0:00:00.391) 0:01:22.867 ******** 2025-06-11 15:07:15.098975 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-11 15:07:15.098986 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.098996 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-11 15:07:15.099007 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.099018 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-11 15:07:15.099028 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.099039 | orchestrator | 2025-06-11 15:07:15.099050 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-11 15:07:15.099061 | orchestrator | Wednesday 11 June 2025 15:05:57 +0000 (0:00:03.222) 0:01:26.090 ******** 2025-06-11 15:07:15.099073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.099112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.099127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-11 15:07:15.099139 | orchestrator | 2025-06-11 15:07:15.099150 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-11 15:07:15.099161 | orchestrator | Wednesday 11 June 2025 15:06:01 +0000 (0:00:03.293) 0:01:29.384 ******** 2025-06-11 15:07:15.099171 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:15.099182 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:15.099193 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:15.099203 | orchestrator | 2025-06-11 15:07:15.099214 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-11 15:07:15.099225 | orchestrator | Wednesday 11 June 2025 15:06:01 +0000 (0:00:00.256) 0:01:29.641 ******** 2025-06-11 15:07:15.099242 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099253 | orchestrator | 2025-06-11 15:07:15.099263 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-11 15:07:15.099273 | orchestrator | Wednesday 11 June 2025 15:06:03 +0000 (0:00:02.304) 0:01:31.945 ******** 2025-06-11 15:07:15.099282 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099292 | orchestrator | 2025-06-11 15:07:15.099301 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-11 15:07:15.099311 | orchestrator | Wednesday 11 June 2025 15:06:06 +0000 (0:00:02.546) 0:01:34.491 ******** 2025-06-11 15:07:15.099320 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099329 | orchestrator | 2025-06-11 15:07:15.099339 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-11 15:07:15.099349 | orchestrator | Wednesday 11 June 2025 15:06:08 +0000 (0:00:02.300) 0:01:36.792 ******** 2025-06-11 15:07:15.099358 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099367 | orchestrator | 2025-06-11 15:07:15.099377 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-11 15:07:15.099391 | orchestrator | Wednesday 11 June 2025 15:06:37 +0000 (0:00:29.014) 0:02:05.807 ******** 2025-06-11 15:07:15.099402 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099411 | orchestrator | 2025-06-11 15:07:15.099427 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-11 15:07:15.099437 | orchestrator | Wednesday 11 June 2025 15:06:40 +0000 (0:00:00.063) 0:02:08.299 ******** 2025-06-11 15:07:15.099447 | orchestrator | 2025-06-11 15:07:15.099456 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-11 15:07:15.099466 | orchestrator | Wednesday 11 June 2025 15:06:40 +0000 (0:00:00.062) 0:02:08.363 ******** 2025-06-11 15:07:15.099476 | orchestrator | 2025-06-11 15:07:15.099486 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-11 15:07:15.099495 | orchestrator | Wednesday 11 June 2025 15:06:40 +0000 (0:00:00.065) 0:02:08.425 ******** 2025-06-11 15:07:15.099505 | orchestrator | 
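[Editor's aside: the Enable/Disable log_bin_trust_function_creators pair wrapped around the bootstrap container above is there because Glance's schema migrations can create stored functions and triggers, which MariaDB rejects from non-SUPER users while binary logging is active unless that global is set. Kolla toggles it via an Ansible module; the sketch below shows the same pattern with PyMySQL, where the host, user and password are placeholders and run_glance_bootstrap is a hypothetical stand-in for the bootstrap container.]

# Sketch of the enable/bootstrap/disable pattern seen above, using PyMySQL.
import contextlib
import pymysql

@contextlib.contextmanager
def trust_function_creators(**conn_kwargs):
    """Temporarily allow function/trigger creation under binary logging."""
    conn = pymysql.connect(**conn_kwargs)
    try:
        with conn.cursor() as cur:
            cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        yield
    finally:
        with conn.cursor() as cur:
            cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
        conn.close()

# Usage (all values hypothetical):
# with trust_function_creators(host="192.168.16.9", user="root", password="..."):
#     run_glance_bootstrap()  # stand-in for the Glance bootstrap container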
2025-06-11 15:07:15.099514 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-11 15:07:15.099524 | orchestrator | Wednesday 11 June 2025 15:06:40 +0000 (0:00:00.065) 0:02:08.491 ******** 2025-06-11 15:07:15.099533 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:15.099543 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:07:15.099553 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:07:15.099562 | orchestrator | 2025-06-11 15:07:15.099572 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:07:15.099583 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-11 15:07:15.099595 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:07:15.099604 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:07:15.099614 | orchestrator | 2025-06-11 15:07:15.099623 | orchestrator | 2025-06-11 15:07:15.099633 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:07:15.099642 | orchestrator | Wednesday 11 June 2025 15:07:11 +0000 (0:00:31.343) 0:02:39.834 ******** 2025-06-11 15:07:15.099652 | orchestrator | =============================================================================== 2025-06-11 15:07:15.099661 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.34s 2025-06-11 15:07:15.099671 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.01s 2025-06-11 15:07:15.099680 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.66s 2025-06-11 15:07:15.099690 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.45s 2025-06-11 15:07:15.099705 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.96s 2025-06-11 15:07:15.099715 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.32s 2025-06-11 15:07:15.099725 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.53s 2025-06-11 15:07:15.099735 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.29s 2025-06-11 15:07:15.099744 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.14s 2025-06-11 15:07:15.099754 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.02s 2025-06-11 15:07:15.099763 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.94s 2025-06-11 15:07:15.099772 | orchestrator | glance : Copying over config.json files for services -------------------- 3.82s 2025-06-11 15:07:15.099782 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.78s 2025-06-11 15:07:15.099792 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.46s 2025-06-11 15:07:15.099801 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.44s 2025-06-11 15:07:15.099810 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.39s 2025-06-11 15:07:15.099820 | orchestrator | glance : Check glance containers ---------------------------------------- 3.29s
2025-06-11 15:07:15.099829 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.22s 2025-06-11 15:07:15.099839 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.20s 2025-06-11 15:07:15.099864 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.13s 2025-06-11 15:07:15.099875 | orchestrator | 2025-06-11 15:07:15 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:15.100509 | orchestrator | 2025-06-11 15:07:15 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:15.106379 | orchestrator | 2025-06-11 15:07:15 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state STARTED 2025-06-11 15:07:15.106424 | orchestrator | 2025-06-11 15:07:15 | INFO  | Wait 1 second(s) until the next check [... identical checks of tasks 7f20d24d, 6a2af4f7, 56d11f9a and 1103ac24 repeated every ~3 seconds from 15:07:18 through 15:07:39, all still STARTED ...] 2025-06-11 15:07:42.584938 | orchestrator | 2025-06-11 15:07:42 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:42.585487 | orchestrator | 2025-06-11 15:07:42 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:42.588259 | orchestrator | 2025-06-11 15:07:42 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:42.593398 | orchestrator | 2025-06-11 15:07:42 | INFO  | Task 1103ac24-a156-4fc1-9a86-b7a1e6b43ce2 is in state SUCCESS 2025-06-11 15:07:42.593706 | orchestrator | 2025-06-11 15:07:42.596516 | orchestrator | 2025-06-11 15:07:42.596579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:07:42.596602 | orchestrator | 2025-06-11 15:07:42.596614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:07:42.596626 | orchestrator | Wednesday 11 June 2025 
15:04:38 +0000 (0:00:00.259) 0:00:00.259 ******** 2025-06-11 15:07:42.596637 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:07:42.596649 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:07:42.596660 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:07:42.596671 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:07:42.596681 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:07:42.596692 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:07:42.596703 | orchestrator | 2025-06-11 15:07:42.596714 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:07:42.596724 | orchestrator | Wednesday 11 June 2025 15:04:39 +0000 (0:00:00.668) 0:00:00.927 ******** 2025-06-11 15:07:42.596735 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-11 15:07:42.596747 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-11 15:07:42.596757 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-11 15:07:42.596768 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-11 15:07:42.596778 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-11 15:07:42.596789 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-11 15:07:42.596800 | orchestrator | 2025-06-11 15:07:42.596810 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-11 15:07:42.596872 | orchestrator | 2025-06-11 15:07:42.596885 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-11 15:07:42.596896 | orchestrator | Wednesday 11 June 2025 15:04:39 +0000 (0:00:00.570) 0:00:01.497 ******** 2025-06-11 15:07:42.596908 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:07:42.596920 | orchestrator | 2025-06-11 15:07:42.596931 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-11 15:07:42.596942 | orchestrator | Wednesday 11 June 2025 15:04:40 +0000 (0:00:01.132) 0:00:02.629 ******** 2025-06-11 15:07:42.596953 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-11 15:07:42.596964 | orchestrator | 2025-06-11 15:07:42.596974 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-11 15:07:42.596985 | orchestrator | Wednesday 11 June 2025 15:04:44 +0000 (0:00:03.519) 0:00:06.149 ******** 2025-06-11 15:07:42.596996 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-11 15:07:42.597007 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-11 15:07:42.597018 | orchestrator | 2025-06-11 15:07:42.597028 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-11 15:07:42.597039 | orchestrator | Wednesday 11 June 2025 15:04:50 +0000 (0:00:06.523) 0:00:12.672 ******** 2025-06-11 15:07:42.597050 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:07:42.597060 | orchestrator | 2025-06-11 15:07:42.597071 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-11 15:07:42.597082 | orchestrator | Wednesday 11 June 2025 15:04:54 +0000 
(0:00:03.474) 0:00:16.147 ******** 2025-06-11 15:07:42.597113 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:07:42.597124 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-11 15:07:42.597135 | orchestrator | 2025-06-11 15:07:42.597146 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-11 15:07:42.597157 | orchestrator | Wednesday 11 June 2025 15:04:58 +0000 (0:00:04.139) 0:00:20.286 ******** 2025-06-11 15:07:42.597167 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:07:42.597178 | orchestrator | 2025-06-11 15:07:42.597188 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-11 15:07:42.597199 | orchestrator | Wednesday 11 June 2025 15:05:02 +0000 (0:00:03.730) 0:00:24.017 ******** 2025-06-11 15:07:42.597209 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-11 15:07:42.597220 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-11 15:07:42.597230 | orchestrator | 2025-06-11 15:07:42.597241 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-11 15:07:42.597251 | orchestrator | Wednesday 11 June 2025 15:05:10 +0000 (0:00:07.919) 0:00:31.937 ******** 2025-06-11 15:07:42.597281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.597319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.597332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.597344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597448 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.597488 | orchestrator | 2025-06-11 15:07:42.597505 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-11 15:07:42.597517 | orchestrator | Wednesday 11 June 2025 15:05:12 +0000 (0:00:02.217) 0:00:34.154 ******** 2025-06-11 15:07:42.597528 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.597539 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.597550 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.597560 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.597571 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.597581 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.597600 | orchestrator | 2025-06-11 15:07:42.597618 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-11 15:07:42.597636 | orchestrator | Wednesday 11 June 2025 15:05:13 +0000 (0:00:00.776) 0:00:34.930 ******** 2025-06-11 15:07:42.597654 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.597673 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.597692 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.597707 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:07:42.597718 | orchestrator | 2025-06-11 15:07:42.597729 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-11 15:07:42.597739 | orchestrator | Wednesday 11 June 2025 15:05:14 +0000 (0:00:01.093) 0:00:36.024 ******** 2025-06-11 15:07:42.597750 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-11 15:07:42.597770 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-11 15:07:42.597781 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-11 15:07:42.597792 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-11 15:07:42.597802 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-11 15:07:42.597813 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-11 15:07:42.597852 | orchestrator | 2025-06-11 15:07:42.597864 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-11 15:07:42.597875 | orchestrator | Wednesday 11 June 2025 15:05:16 +0000 (0:00:02.306) 0:00:38.331 ******** 2025-06-11 15:07:42.597887 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.597900 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.597912 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.597932 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.597945 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.597964 | orchestrator | skipping: [testbed-node-3] => 
(item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-11 15:07:42.598005 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598077 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598117 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598150 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598170 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598190 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-11 15:07:42.598209 | orchestrator | 2025-06-11 15:07:42.598227 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-11 15:07:42.598247 | orchestrator | Wednesday 11 June 2025 15:05:19 +0000 (0:00:03.264) 0:00:41.596 ******** 2025-06-11 15:07:42.598259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:42.598271 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:42.598289 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-11 15:07:42.598299 | orchestrator | 2025-06-11 15:07:42.598310 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-11 15:07:42.598321 | orchestrator | Wednesday 11 June 2025 15:05:21 +0000 (0:00:01.896) 0:00:43.492 ******** 2025-06-11 15:07:42.598332 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-11 15:07:42.598342 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-11 15:07:42.598353 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 
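The two keyring tasks here distribute the external Ceph credentials: each cinder-volume host receives the client.cinder keyring for its rbd-1 backend, and the cinder-backup hosts (the copies continuing below) receive client.cinder plus client.cinder-backup. Per host this fan-out reduces to placing the keyring files into the service's /etc/kolla config directory with restrictive permissions. A plain-Python sketch of that step, with the source directory being an assumption (kolla-ansible actually renders these files from the deployment configuration):

    from pathlib import Path
    import shutil

    # Hypothetical source path for the rendered keyrings.
    SRC = Path("/opt/configuration/environments/kolla/files/overlays/ceph")

    KEYRINGS = {
        "cinder-volume": ["ceph.client.cinder.keyring"],
        "cinder-backup": ["ceph.client.cinder.keyring",
                          "ceph.client.cinder-backup.keyring"],
    }

    for service, names in KEYRINGS.items():
        dest = Path("/etc/kolla") / service
        dest.mkdir(parents=True, exist_ok=True)
        for name in names:
            shutil.copy2(SRC / name, dest / name)  # copy preserving metadata
            (dest / name).chmod(0o600)             # keyrings must stay private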
2025-06-11 15:07:42.598363 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 15:07:42.598374 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 15:07:42.598392 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-11 15:07:42.598412 | orchestrator | 2025-06-11 15:07:42.598423 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-11 15:07:42.598433 | orchestrator | Wednesday 11 June 2025 15:05:24 +0000 (0:00:02.811) 0:00:46.304 ******** 2025-06-11 15:07:42.598444 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-11 15:07:42.598455 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-11 15:07:42.598466 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-11 15:07:42.598476 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-11 15:07:42.598487 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-11 15:07:42.598498 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-11 15:07:42.598508 | orchestrator | 2025-06-11 15:07:42.598519 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-11 15:07:42.598530 | orchestrator | Wednesday 11 June 2025 15:05:25 +0000 (0:00:01.025) 0:00:47.329 ******** 2025-06-11 15:07:42.598540 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.598551 | orchestrator | 2025-06-11 15:07:42.598562 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-11 15:07:42.598572 | orchestrator | Wednesday 11 June 2025 15:05:25 +0000 (0:00:00.143) 0:00:47.473 ******** 2025-06-11 15:07:42.598583 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.598593 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.598604 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.598614 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.598625 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.598635 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.598646 | orchestrator | 2025-06-11 15:07:42.598657 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-11 15:07:42.598667 | orchestrator | Wednesday 11 June 2025 15:05:26 +0000 (0:00:00.598) 0:00:48.071 ******** 2025-06-11 15:07:42.598679 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:07:42.598691 | orchestrator | 2025-06-11 15:07:42.598701 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-11 15:07:42.598718 | orchestrator | Wednesday 11 June 2025 15:05:27 +0000 (0:00:01.077) 0:00:49.148 ******** 2025-06-11 15:07:42.598737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.598757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.598811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.598858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.598989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599000 | orchestrator | 2025-06-11 15:07:42.599011 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-11 15:07:42.599022 | orchestrator | Wednesday 11 June 2025 15:05:30 +0000 (0:00:02.849) 0:00:51.998 ******** 2025-06-11 15:07:42.599038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599097 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.599108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599172 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.599183 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.599194 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.599205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599228 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.599239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599273 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.599284 | orchestrator | 2025-06-11 15:07:42.599295 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-11 15:07:42.599306 | orchestrator | Wednesday 11 June 2025 15:05:32 +0000 (0:00:02.353) 0:00:54.351 ******** 2025-06-11 15:07:42.599324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599384 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.599403 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.599419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.599438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599450 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.599462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599484 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.599503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599548 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.599559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.599570 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.599581 | orchestrator | 2025-06-11 15:07:42.599592 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-11 15:07:42.599603 | orchestrator | Wednesday 11 June 2025 15:05:34 +0000 (0:00:01.908) 0:00:56.259 ******** 2025-06-11 15:07:42.599614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.599632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.599652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.599670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.599800 | orchestrator | 2025-06-11 15:07:42.599811 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-11 15:07:42.599850 | orchestrator | Wednesday 11 June 2025 15:05:37 +0000 (0:00:03.052) 0:00:59.312 ******** 2025-06-11 15:07:42.599869 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-11 15:07:42.599887 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-11 15:07:42.599905 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.599932 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-11 15:07:42.599960 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.599987 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-11 15:07:42.600006 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.600026 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-11 15:07:42.600045 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-11 15:07:42.600062 | orchestrator | 2025-06-11 15:07:42.600078 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-11 15:07:42.600088 | orchestrator | Wednesday 11 June 2025 15:05:39 +0000 (0:00:02.398) 0:01:01.710 ******** 2025-06-11 15:07:42.600107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.600134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.600155 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.600215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600275 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.600366 | orchestrator | 2025-06-11 15:07:42.600377 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-11 15:07:42.600388 | orchestrator | Wednesday 11 June 2025 15:05:49 +0000 (0:00:09.835) 0:01:11.545 ******** 2025-06-11 15:07:42.600405 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.600416 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.600427 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.600437 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:07:42.600448 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:07:42.600459 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:07:42.600476 | orchestrator | 2025-06-11 15:07:42.600488 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-11 15:07:42.600498 | orchestrator | Wednesday 11 June 2025 15:05:51 +0000 (0:00:02.281) 0:01:13.826 ******** 2025-06-11 15:07:42.600510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.600521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600533 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.600544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.600555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600567 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.600599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-11 15:07:42.600623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600635 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.600646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600681 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.600696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600714 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.600733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-11 15:07:42.600756 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.600767 | orchestrator | 2025-06-11 15:07:42.600777 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-11 15:07:42.600788 | orchestrator | Wednesday 11 June 2025 15:05:53 +0000 (0:00:01.222) 0:01:15.049 ******** 2025-06-11 15:07:42.600799 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.600810 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.600852 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.600863 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.600874 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.600885 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.600896 | orchestrator | 2025-06-11 15:07:42.600907 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-11 15:07:42.600917 | orchestrator | Wednesday 11 June 2025 15:05:54 +0000 (0:00:00.982) 0:01:16.032 ******** 2025-06-11 15:07:42.600929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.600945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
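The `item` dicts looped over in the cinder tasks above all share the kolla-ansible service-definition shape: a service key mapping to container_name, group, enabled, image, volumes, dimensions, an optional healthcheck, and (for API services) an haproxy section. The bare '' entries inside volumes and tmpfs appear to be placeholders left by conditional mounts that are disabled in this deployment, and a test such as 'healthcheck_port cinder-volume 5672' checks for a live connection to port 5672 (the AMQP broker). A minimal sketch of how such a mapping can be consumed, assuming a hypothetical services dict shaped like the loop items in the log (the helper names are illustrative, not kolla-ansible's own code):

import json

# Hypothetical service map, shaped like the loop items logged above.
services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "group": "cinder-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-api:2024.2",
        "volumes": [
            "/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
            "",  # empty entries stand in for conditional mounts that are off
            "",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
            "timeout": "30",
        },
    },
}

def effective_volumes(service):
    """Drop the empty strings that disabled conditional mounts leave behind."""
    return [v for v in service["volumes"] if v]

for name, svc in services.items():
    if not svc.get("enabled"):
        continue  # disabled services are skipped, like the log's "skipping:" lines
    print(name, "->", svc["container_name"], json.dumps(effective_volumes(svc)))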
2025-06-11 15:07:42.601044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-11 15:07:42.601060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-06-11 15:07:42.601108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-11 15:07:42.601184 | orchestrator | 2025-06-11 15:07:42.601202 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-11 15:07:42.601213 | orchestrator | Wednesday 11 June 2025 15:05:56 +0000 (0:00:02.294) 0:01:18.326 ******** 2025-06-11 15:07:42.601224 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.601235 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:07:42.601245 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:07:42.601256 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:07:42.601266 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:07:42.601277 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:07:42.601288 | orchestrator | 2025-06-11 15:07:42.601298 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-11 15:07:42.601314 | orchestrator | Wednesday 11 June 2025 15:05:57 +0000 (0:00:00.740) 0:01:19.067 ******** 2025-06-11 15:07:42.601325 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:42.601335 | orchestrator | 2025-06-11 15:07:42.601346 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-11 15:07:42.601357 | orchestrator | Wednesday 11 June 2025 15:05:59 +0000 (0:00:02.286) 0:01:21.354 ******** 2025-06-11 15:07:42.601367 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:42.601378 | orchestrator | 2025-06-11 15:07:42.601388 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-11 15:07:42.601399 | orchestrator | Wednesday 11 June 2025 15:06:02 +0000 (0:00:02.547) 0:01:23.902 ******** 2025-06-11 15:07:42.601410 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:42.601420 | orchestrator | 2025-06-11 15:07:42.601431 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601442 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:21.414) 0:01:45.316 ******** 2025-06-11 15:07:42.601453 | orchestrator | 2025-06-11 15:07:42.601469 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601480 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.063) 0:01:45.380 ******** 2025-06-11 15:07:42.601490 | orchestrator | 2025-06-11 15:07:42.601501 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601512 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.064) 0:01:45.444 ******** 2025-06-11 15:07:42.601522 | orchestrator | 2025-06-11 15:07:42.601533 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601544 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.063) 0:01:45.507 ******** 2025-06-11 15:07:42.601554 | orchestrator | 2025-06-11 15:07:42.601565 | orchestrator | TASK 
[cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601576 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.063) 0:01:45.571 ******** 2025-06-11 15:07:42.601586 | orchestrator | 2025-06-11 15:07:42.601597 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-11 15:07:42.601607 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.064) 0:01:45.635 ******** 2025-06-11 15:07:42.601618 | orchestrator | 2025-06-11 15:07:42.601629 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-11 15:07:42.601639 | orchestrator | Wednesday 11 June 2025 15:06:23 +0000 (0:00:00.060) 0:01:45.695 ******** 2025-06-11 15:07:42.601650 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:42.601661 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:07:42.601671 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:07:42.601682 | orchestrator | 2025-06-11 15:07:42.601693 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-11 15:07:42.601703 | orchestrator | Wednesday 11 June 2025 15:06:47 +0000 (0:00:23.881) 0:02:09.577 ******** 2025-06-11 15:07:42.601714 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:07:42.601724 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:07:42.601735 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:07:42.601746 | orchestrator | 2025-06-11 15:07:42.601756 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-11 15:07:42.601773 | orchestrator | Wednesday 11 June 2025 15:06:58 +0000 (0:00:10.709) 0:02:20.286 ******** 2025-06-11 15:07:42.601784 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:07:42.601795 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:07:42.601805 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:07:42.601834 | orchestrator | 2025-06-11 15:07:42.601846 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-11 15:07:42.601857 | orchestrator | Wednesday 11 June 2025 15:07:35 +0000 (0:00:37.005) 0:02:57.292 ******** 2025-06-11 15:07:42.601868 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:07:42.601878 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:07:42.601889 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:07:42.601900 | orchestrator | 2025-06-11 15:07:42.601910 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-11 15:07:42.601921 | orchestrator | Wednesday 11 June 2025 15:07:41 +0000 (0:00:05.607) 0:03:02.899 ******** 2025-06-11 15:07:42.601932 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:07:42.601943 | orchestrator | 2025-06-11 15:07:42.601954 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:07:42.601965 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-11 15:07:42.601976 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-11 15:07:42.601987 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-11 15:07:42.601998 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-11 15:07:42.602009 | 
orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-11 15:07:42.602052 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-11 15:07:42.602063 | orchestrator | 2025-06-11 15:07:42.602074 | orchestrator | 2025-06-11 15:07:42.602085 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:07:42.602096 | orchestrator | Wednesday 11 June 2025 15:07:41 +0000 (0:00:00.582) 0:03:03.481 ******** 2025-06-11 15:07:42.602106 | orchestrator | =============================================================================== 2025-06-11 15:07:42.602123 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 37.01s 2025-06-11 15:07:42.602134 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.88s 2025-06-11 15:07:42.602144 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.41s 2025-06-11 15:07:42.602155 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.71s 2025-06-11 15:07:42.602165 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.84s 2025-06-11 15:07:42.602176 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.92s 2025-06-11 15:07:42.602186 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.52s 2025-06-11 15:07:42.602197 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.61s 2025-06-11 15:07:42.602215 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.14s 2025-06-11 15:07:42.602226 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.73s 2025-06-11 15:07:42.602236 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.52s 2025-06-11 15:07:42.602247 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.47s 2025-06-11 15:07:42.602257 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.26s 2025-06-11 15:07:42.602275 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.05s 2025-06-11 15:07:42.602285 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.85s 2025-06-11 15:07:42.602296 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.81s 2025-06-11 15:07:42.602306 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.55s 2025-06-11 15:07:42.602317 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.40s 2025-06-11 15:07:42.602328 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 2.35s 2025-06-11 15:07:42.602338 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.31s 2025-06-11 15:07:42.602349 | orchestrator | 2025-06-11 15:07:42 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:07:45.640771 | orchestrator | 2025-06-11 15:07:45 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:45.641150 | orchestrator | 2025-06-11 15:07:45 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state 
STARTED 2025-06-11 15:07:45.641783 | orchestrator | 2025-06-11 15:07:45 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:45.644000 | orchestrator | 2025-06-11 15:07:45 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED 2025-06-11 15:07:45.644023 | orchestrator | 2025-06-11 15:07:45 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:07:48.678263 | orchestrator | 2025-06-11 15:07:48 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:48.679900 | orchestrator | 2025-06-11 15:07:48 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:48.682139 | orchestrator | 2025-06-11 15:07:48 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:48.683393 | orchestrator | 2025-06-11 15:07:48 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED 2025-06-11 15:07:48.683426 | orchestrator | 2025-06-11 15:07:48 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:07:51.731015 | orchestrator | 2025-06-11 15:07:51 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:51.733738 | orchestrator | 2025-06-11 15:07:51 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:51.736361 | orchestrator | 2025-06-11 15:07:51 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:51.740035 | orchestrator | 2025-06-11 15:07:51 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED 2025-06-11 15:07:51.740308 | orchestrator | 2025-06-11 15:07:51 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:07:54.776981 | orchestrator | 2025-06-11 15:07:54 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:54.777722 | orchestrator | 2025-06-11 15:07:54 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:54.779787 | orchestrator | 2025-06-11 15:07:54 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:54.781247 | orchestrator | 2025-06-11 15:07:54 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED 2025-06-11 15:07:54.781274 | orchestrator | 2025-06-11 15:07:54 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:07:57.824877 | orchestrator | 2025-06-11 15:07:57 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:07:57.824952 | orchestrator | 2025-06-11 15:07:57 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state STARTED 2025-06-11 15:07:57.825828 | orchestrator | 2025-06-11 15:07:57 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED 2025-06-11 15:07:57.825920 | orchestrator | 2025-06-11 15:07:57 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED 2025-06-11 15:07:57.826107 | orchestrator | 2025-06-11 15:07:57 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:08:00.866395 | orchestrator | 2025-06-11 15:08:00 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:08:00.870296 | orchestrator | 2025-06-11 15:08:00 | INFO  | Task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 is in state SUCCESS 2025-06-11 15:08:00.873045 | orchestrator | 2025-06-11 15:08:00.873085 | orchestrator | 2025-06-11 15:08:00.873092 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:08:00.873099 | orchestrator | 
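The interleaved INFO records above come from the OSISM wrapper that submitted these plays: it polls each task's state roughly once per second until the state leaves STARTED, and replays the captured Ansible output once a task reports SUCCESS — as happens for task 6a2af4f7-d4f1-44b3-a078-8c4c4c836f86 at 15:08:00, after which the grafana play output (timestamped from 15:05:34 onward) is printed below. A minimal sketch of that kind of poll loop, with get_state as a stand-in for whatever state lookup the real tool performs:

import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll until every task id reports a terminal state (SUCCESS/FAILURE),
    echoing per-task state lines like the log records above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)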
2025-06-11 15:08:00.873106 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:08:00.873112 | orchestrator | Wednesday 11 June 2025 15:05:34 +0000 (0:00:00.295) 0:00:00.295 ******** 2025-06-11 15:08:00.873118 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:08:00.873126 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:08:00.873132 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:08:00.873138 | orchestrator | 2025-06-11 15:08:00.873144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:08:00.873150 | orchestrator | Wednesday 11 June 2025 15:05:34 +0000 (0:00:00.250) 0:00:00.545 ******** 2025-06-11 15:08:00.873155 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-11 15:08:00.873162 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-11 15:08:00.873167 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-11 15:08:00.873173 | orchestrator | 2025-06-11 15:08:00.873179 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-11 15:08:00.873185 | orchestrator | 2025-06-11 15:08:00.873191 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-11 15:08:00.873196 | orchestrator | Wednesday 11 June 2025 15:05:34 +0000 (0:00:00.482) 0:00:01.028 ******** 2025-06-11 15:08:00.873202 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:08:00.873209 | orchestrator | 2025-06-11 15:08:00.873215 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-11 15:08:00.873220 | orchestrator | Wednesday 11 June 2025 15:05:35 +0000 (0:00:00.712) 0:00:01.741 ******** 2025-06-11 15:08:00.873229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 15:08:00.873239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 15:08:00.873245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 15:08:00.873269 | orchestrator | 2025-06-11 15:08:00.873276 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-11 15:08:00.873282 | orchestrator | Wednesday 11 June 2025 15:05:36 +0000 (0:00:00.682) 0:00:02.424 ******** 2025-06-11 15:08:00.873288 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-11 15:08:00.873294 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-11 15:08:00.873300 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:08:00.873306 | orchestrator | 2025-06-11 15:08:00.873312 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-11 15:08:00.873317 | orchestrator | Wednesday 11 June 2025 15:05:37 +0000 (0:00:00.921) 0:00:03.345 ******** 2025-06-11 15:08:00.873323 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:08:00.873329 | orchestrator | 2025-06-11 15:08:00.873334 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-11 15:08:00.873340 | orchestrator | Wednesday 11 June 2025 15:05:38 +0000 (0:00:01.109) 0:00:04.455 ******** 2025-06-11 15:08:00.873356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 15:08:00.873363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-11 15:08:00.873369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
2025-06-11 15:08:00.873269 | orchestrator |
2025-06-11 15:08:00.873276 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-11 15:08:00.873282 | orchestrator | Wednesday 11 June 2025 15:05:36 +0000 (0:00:00.682) 0:00:02.424 ********
2025-06-11 15:08:00.873288 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-11 15:08:00.873294 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-11 15:08:00.873300 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 15:08:00.873306 | orchestrator |
2025-06-11 15:08:00.873312 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-11 15:08:00.873317 | orchestrator | Wednesday 11 June 2025 15:05:37 +0000 (0:00:00.921) 0:00:03.345 ********
2025-06-11 15:08:00.873323 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:08:00.873329 | orchestrator |
2025-06-11 15:08:00.873334 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-11 15:08:00.873340 | orchestrator | Wednesday 11 June 2025 15:05:38 +0000 (0:00:01.109) 0:00:04.455 ********
2025-06-11 15:08:00.873356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873375 | orchestrator |
2025-06-11 15:08:00.873381 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-11 15:08:00.873392 | orchestrator | Wednesday 11 June 2025 15:05:40 +0000 (0:00:01.748) 0:00:06.203 ********
2025-06-11 15:08:00.873398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873410 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.873416 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.873426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873433 | orchestrator | skipping: [testbed-node-2]
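The backend-TLS copy tasks are skipped on every node, which indicates backend TLS is not enabled in this deployment. In kolla-ansible such tasks are gated by a flag along the lines of kolla_enable_tls_backend; a minimal sketch of the pattern (the src/dest paths here are hypothetical, only the gating idea is the point):

    - name: grafana | Copying over backend internal TLS certificate
      ansible.builtin.copy:
        src: "{{ kolla_tls_backend_cert }}"                              # assumption: variable name
        dest: "{{ node_config_directory }}/grafana/grafana-cert.pem"     # hypothetical path
        mode: "0600"
      when: kolla_enable_tls_backend | bool   # false in this run, hence "skipping"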
2025-06-11 15:08:00.873438 | orchestrator |
2025-06-11 15:08:00.873444 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-11 15:08:00.873450 | orchestrator | Wednesday 11 June 2025 15:05:40 +0000 (0:00:00.717) 0:00:06.921 ********
2025-06-11 15:08:00.873455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873467 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.873477 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.873483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873489 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.873495 | orchestrator |
2025-06-11 15:08:00.873501 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-11 15:08:00.873507 | orchestrator | Wednesday 11 June 2025 15:05:42 +0000 (0:00:01.726) 0:00:08.647 ********
2025-06-11 15:08:00.873513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
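The config.json templated here is kolla's per-container bootstrap spec: it tells the kolla_start entrypoint which command to run and which files to copy from /var/lib/kolla/config_files/ into place. A sketch of its documented shape for grafana (the command string and paths are assumptions, not taken from the log):

    {
        "command": "/usr/share/grafana/bin/grafana server --config /etc/grafana/grafana.ini",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/grafana.ini",
                "dest": "/etc/grafana/grafana.ini",
                "owner": "grafana",
                "perm": "0600"
            }
        ]
    }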
2025-06-11 15:08:00.873535 | orchestrator |
2025-06-11 15:08:00.873541 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-11 15:08:00.873547 | orchestrator | Wednesday 11 June 2025 15:05:44 +0000 (0:00:01.696) 0:00:10.344 ********
2025-06-11 15:08:00.873552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00.873575 | orchestrator |
2025-06-11 15:08:00.873580 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-11 15:08:00.873586 | orchestrator | Wednesday 11 June 2025 15:05:46 +0000 (0:00:01.922) 0:00:12.267 ********
2025-06-11 15:08:00.873592 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.873597 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.873603 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.873608 | orchestrator |
2025-06-11 15:08:00.873614 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-11 15:08:00.873620 | orchestrator | Wednesday 11 June 2025 15:05:47 +0000 (0:00:00.941) 0:00:13.208 ********
2025-06-11 15:08:00.873625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-11 15:08:00.873631 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-11 15:08:00.873645 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
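The prometheus.yaml.j2 template rendered above produces a standard Grafana datasource provisioning file. A sketch of the rendered result, assuming Grafana's documented provisioning format (the URL, port, and isDefault value are assumptions; only the template path comes from the log):

    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-server:9091   # assumption: internal endpoint and port
        isDefault: true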
2025-06-11 15:08:00.873652 | orchestrator |
2025-06-11 15:08:00.873711 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-11 15:08:00.873720 | orchestrator | Wednesday 11 June 2025 15:05:48 +0000 (0:00:01.837) 0:00:15.046 ********
2025-06-11 15:08:00.873726 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-11 15:08:00.873734 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-11 15:08:00.873740 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-11 15:08:00.873747 | orchestrator |
2025-06-11 15:08:00.873753 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-11 15:08:00.873760 | orchestrator | Wednesday 11 June 2025 15:05:50 +0000 (0:00:01.549) 0:00:16.595 ********
2025-06-11 15:08:00.873771 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-11 15:08:00.873779 | orchestrator |
2025-06-11 15:08:00.873786 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-11 15:08:00.873793 | orchestrator | Wednesday 11 June 2025 15:05:51 +0000 (0:00:00.934) 0:00:17.530 ********
2025-06-11 15:08:00.873837 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-11 15:08:00.873844 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-11 15:08:00.873850 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:08:00.873866 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:08:00.873872 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:08:00.873877 | orchestrator |
2025-06-11 15:08:00.873883 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-11 15:08:00.873889 | orchestrator | Wednesday 11 June 2025 15:05:52 +0000 (0:00:00.710) 0:00:18.241 ********
2025-06-11 15:08:00.873894 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.873900 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.873906 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.873911 | orchestrator |
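The provisioning.yaml copied above is a standard Grafana dashboard-provider definition; the find/copy tasks that follow then populate the directory it points at with the dashboards found under /operations/grafana/dashboards/. A sketch of such a provider file, assuming Grafana's documented format (the provider name and path are assumptions, not taken from the log):

    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        type: file
        disableDeletion: false
        options:
          path: /var/lib/grafana/dashboards   # assumption: where the copied JSON files land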
2025-06-11 15:08:00.873917 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-11 15:08:00.873923 | orchestrator | Wednesday 11 June 2025 15:05:53 +0000 (0:00:00.933) 0:00:19.174 ********
2025-06-11 15:08:00.873929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094385, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0480452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.873936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094385, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0480452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.873943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094385, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0480452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.873949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094367, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.042045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.873959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094367, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.042045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.873970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir':
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094367, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.042045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.873976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094349, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094349, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094349, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094379, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094379, 'dev': 97, 'nlink': 1, 'atime': 
1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094379, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094338, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0330448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094338, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0330448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094338, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0330448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094354, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.039045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094354, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.039045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094354, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.039045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094378, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094378, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094378, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0450451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874400 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094332, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094332, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094332, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094318, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0260448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094318, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0260448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094318, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0260448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094341, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.034045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094341, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.034045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094341, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.034045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094325, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0290449, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094325, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0290449, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094325, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0290449, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094371, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0440452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094371, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0440452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094371, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0440452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094342, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.036045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-11 15:08:00.874523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094342, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.036045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094342, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.036045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094382, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0460453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094382, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0460453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094382, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0460453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094331, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094331, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094331, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.032045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094360, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0410452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094360, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0410452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 49139, 'inode': 1094360, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0410452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094323, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0270448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094323, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0270448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094323, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0270448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094327, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.031045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094327, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.031045, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094327, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.031045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094347, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094347, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094347, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.037045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-11 15:08:00.874945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094481, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0750458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00.874952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094481, 'dev': 97, 'nlink': 1, 'atime': 1749600133.0, 'mtime': 1749600133.0, 'ctime': 1749651650.0750458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-11 15:08:00 | orchestrator | [equivalent "changed" results follow for testbed-node-0, testbed-node-1 and testbed-node-2 for each remaining custom dashboard: infrastructure/node_exporter_full.json, infrastructure/libvirt.json, infrastructure/alertmanager-overview.json, infrastructure/prometheus_alertmanager.json, infrastructure/blackbox.json, infrastructure/prometheus-remote-write.json, infrastructure/rabbitmq.json, infrastructure/node_exporter_side_by_side.json, infrastructure/opensearch.json, infrastructure/cadvisor.json, infrastructure/memcached.json, infrastructure/redfish.json, infrastructure/prometheus.json, infrastructure/elasticsearch.json, infrastructure/database.json, infrastructure/fluentd.json, infrastructure/haproxy.json, infrastructure/node-cluster-rsrc-use.json, infrastructure/nodes.json, infrastructure/node-rsrc-use.json and openstack/openstack.json; the per-item stat dicts differ only in path, size, inode and ctime]
2025-06-11 15:08:00.875446 | orchestrator |
2025-06-11 15:08:00.875453 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-11 15:08:00.875459 | orchestrator | Wednesday 11 June 2025 15:06:30 +0000 (0:00:37.665) 0:00:56.840 ********
2025-06-11 15:08:00.875466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-11 15:08:00 | orchestrator | [the same item produces "changed" on testbed-node-1 and testbed-node-2]
2025-06-11 15:08:00.875486 | orchestrator |
2025-06-11 15:08:00.875492 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-11 15:08:00.875498 | orchestrator | Wednesday 11 June 2025 15:06:31 +0000 (0:00:00.966) 0:00:57.807 ********
2025-06-11 15:08:00.875504 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:08:00.875511 | orchestrator |
2025-06-11 15:08:00.875517 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-11 15:08:00.875523 | orchestrator | Wednesday 11 June 2025 15:06:33 +0000 (0:00:02.255) 0:01:00.062 ********
2025-06-11 15:08:00.875529 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:08:00.875535 | orchestrator |
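The "Copying over custom dashboards" loop above iterates over a dict of dashboard files (item.key = relative dashboard name, item.value = stat result) and copies each file onto every grafana host, which is why all three testbed nodes report "changed" and the grafana containers are restarted later in the play. A minimal sketch of such a task, assuming a kolla-ansible-style role; the task name matches the recap below, but the variable, destination path and handler name are illustrative assumptions, not the actual role source:

```yaml
# Sketch only: mirrors the item structure visible in the log.
- name: Copying over custom dashboards
  ansible.builtin.copy:
    src: "{{ item.value.path }}"                     # e.g. /operations/grafana/dashboards/infrastructure/nodes.json
    dest: "/etc/kolla/grafana/dashboards/{{ item.key }}"
    mode: "0660"
  with_dict: "{{ grafana_custom_dashboards }}"       # hypothetical variable: relative name -> stat result
  notify:
    - Restart grafana container                      # hypothetical handler name
```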
2025-06-11 15:08:00.875540 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-11 15:08:00.875546 | orchestrator | Wednesday 11 June 2025 15:06:36 +0000 (0:00:02.343) 0:01:02.406 ********
2025-06-11 15:08:00.875552 | orchestrator |
2025-06-11 15:08:00.875557 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-11 15:08:00.875563 | orchestrator | Wednesday 11 June 2025 15:06:36 +0000 (0:00:00.211) 0:01:02.618 ********
2025-06-11 15:08:00.875568 | orchestrator |
2025-06-11 15:08:00.875575 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-11 15:08:00.875585 | orchestrator | Wednesday 11 June 2025 15:06:36 +0000 (0:00:00.061) 0:01:02.679 ********
2025-06-11 15:08:00.875596 | orchestrator |
2025-06-11 15:08:00.875604 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-11 15:08:00.875611 | orchestrator | Wednesday 11 June 2025 15:06:36 +0000 (0:00:00.063) 0:01:02.742 ********
2025-06-11 15:08:00.875618 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.875626 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.875633 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:08:00.875639 | orchestrator |
2025-06-11 15:08:00.875645 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-11 15:08:00.875651 | orchestrator | Wednesday 11 June 2025 15:06:43 +0000 (0:00:07.120) 0:01:09.863 ********
2025-06-11 15:08:00.875657 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.875663 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.875669 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-11 15:08:00.875675 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-11 15:08:00.875681 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-11 15:08:00.875687 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:08:00.875693 | orchestrator |
2025-06-11 15:08:00.875699 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-11 15:08:00.875706 | orchestrator | Wednesday 11 June 2025 15:07:22 +0000 (0:00:38.803) 0:01:48.667 ********
2025-06-11 15:08:00.875712 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.875718 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:08:00.875723 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:08:00.875729 | orchestrator |
2025-06-11 15:08:00.875735 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-11 15:08:00.875741 | orchestrator | Wednesday 11 June 2025 15:07:52 +0000 (0:00:30.051) 0:02:18.718 ********
2025-06-11 15:08:00.875747 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:08:00.875753 | orchestrator |
2025-06-11 15:08:00.875759 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-11 15:08:00.875765 | orchestrator | Wednesday 11 June 2025 15:07:55 +0000 (0:00:02.426) 0:02:21.144 ********
2025-06-11 15:08:00.875772 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.875777 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:08:00.875783 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:08:00.875789 | orchestrator |
2025-06-11 15:08:00.875810 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-11 15:08:00.875817 | orchestrator | Wednesday 11 June 2025 15:07:55 +0000 (0:00:00.285) 0:02:21.429 ********
2025-06-11 15:08:00.875824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-11 15:08:00.875832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-11 15:08:00.875839 | orchestrator |
2025-06-11 15:08:00.875845 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-11 15:08:00.875851 | orchestrator | Wednesday 11 June 2025 15:07:57 +0000 (0:00:02.412) 0:02:23.841 ********
2025-06-11 15:08:00.875857 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:08:00.875863 | orchestrator |
2025-06-11 15:08:00.875869 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:08:00.875875 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-11 15:08:00.875886 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-11 15:08:00.875892 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-11 15:08:00.875898 | orchestrator |
2025-06-11 15:08:00.875911 | orchestrator | TASKS RECAP ********************************************************************
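The FAILED - RETRYING lines above are the normal output of a retrying health check: the handler polls the Grafana endpoint on the first node until it answers, and would fail the play only once the retries are exhausted. A minimal sketch of such a wait task; the address variable and delay are assumptions, with the retry count taken from the "(12 retries left)" countdown in the log:

```yaml
# Sketch only: poll an HTTP endpoint until it responds, as the
# "Waiting for grafana to start on first node" handler does above.
- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    url: "http://{{ api_interface_address }}:3000/login"   # hypothetical variable; port 3000 from the haproxy config above
  register: result
  until: result.status == 200
  retries: 12   # matches the countdown in the log
  delay: 10     # assumed interval
  when: inventory_hostname == groups['grafana'] | first
```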
2025-06-11 15:08:00.875917 | orchestrator | Wednesday 11 June 2025 15:07:57 +0000 (0:00:00.243) 0:02:24.084 ********
2025-06-11 15:08:00.875923 | orchestrator | ===============================================================================
2025-06-11 15:08:00.875929 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.80s
2025-06-11 15:08:00.875935 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.67s
2025-06-11 15:08:00.875941 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.05s
2025-06-11 15:08:00.875947 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.12s
2025-06-11 15:08:00.875953 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s
2025-06-11 15:08:00.875959 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.41s
2025-06-11 15:08:00.875965 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2025-06-11 15:08:00.875974 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.26s
2025-06-11 15:08:00.875981 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.92s
2025-06-11 15:08:00.875987 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.84s
2025-06-11 15:08:00.875993 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.75s
2025-06-11 15:08:00.875999 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.73s
2025-06-11 15:08:00.876005 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.70s
2025-06-11 15:08:00.876011 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.55s
2025-06-11 15:08:00.876017 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.11s
2025-06-11 15:08:00.876023 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.97s
2025-06-11 15:08:00.876029 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.94s
2025-06-11 15:08:00.876035 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.93s
2025-06-11 15:08:00.876041 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.93s
2025-06-11 15:08:00.876047 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.92s
2025-06-11 15:08:00.876053 | orchestrator | 2025-06-11 15:08:00 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state STARTED
2025-06-11 15:08:00.876300 | orchestrator | 2025-06-11 15:08:00 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state STARTED
2025-06-11 15:08:00.876471 | orchestrator | 2025-06-11 15:08:00 | INFO  | Wait 1 second(s) until the next check
[tasks 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7, 56d11f9a-0261-4811-8d50-a81307418915 and 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 are polled every ~3 seconds; identical "is in state STARTED" / "Wait 1 second(s) until the next check" lines repeat until each task finishes]
2025-06-11 15:08:40.511589 | orchestrator | 2025-06-11 15:08:40 | INFO  | Task 2c51edd0-23d9-4ff7-ac04-7e3b120b5794 is in state SUCCESS
2025-06-11 15:09:32.295197 | orchestrator | 2025-06-11 15:09:32 | INFO  | Task 56d11f9a-0261-4811-8d50-a81307418915 is in state SUCCESS
[polling of task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 continues; it is still in state STARTED as of 15:10:30]
2025-06-11 15:10:33.120082 | orchestrator | 2025-06-11 15:10:33 | INFO  | Task
7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:33.120192 | orchestrator | 2025-06-11 15:10:33 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:36.182743 | orchestrator | 2025-06-11 15:10:36 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:36.182901 | orchestrator | 2025-06-11 15:10:36 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:39.215238 | orchestrator | 2025-06-11 15:10:39 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:39.215375 | orchestrator | 2025-06-11 15:10:39 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:42.265172 | orchestrator | 2025-06-11 15:10:42 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:42.265307 | orchestrator | 2025-06-11 15:10:42 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:45.296053 | orchestrator | 2025-06-11 15:10:45 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:45.296174 | orchestrator | 2025-06-11 15:10:45 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:48.333508 | orchestrator | 2025-06-11 15:10:48 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:48.333623 | orchestrator | 2025-06-11 15:10:48 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:51.379991 | orchestrator | 2025-06-11 15:10:51 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:51.380100 | orchestrator | 2025-06-11 15:10:51 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:54.419560 | orchestrator | 2025-06-11 15:10:54 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:54.419670 | orchestrator | 2025-06-11 15:10:54 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:10:57.448567 | orchestrator | 2025-06-11 15:10:57 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:10:57.448668 | orchestrator | 2025-06-11 15:10:57 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:00.495555 | orchestrator | 2025-06-11 15:11:00 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:00.495667 | orchestrator | 2025-06-11 15:11:00 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:03.537229 | orchestrator | 2025-06-11 15:11:03 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:03.537345 | orchestrator | 2025-06-11 15:11:03 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:06.580735 | orchestrator | 2025-06-11 15:11:06 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:06.580911 | orchestrator | 2025-06-11 15:11:06 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:09.626296 | orchestrator | 2025-06-11 15:11:09 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:09.626411 | orchestrator | 2025-06-11 15:11:09 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:12.663376 | orchestrator | 2025-06-11 15:11:12 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:12.663482 | orchestrator | 2025-06-11 15:11:12 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:15.709258 | orchestrator | 2025-06-11 15:11:15 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 
15:11:15.709368 | orchestrator | 2025-06-11 15:11:15 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:18.744691 | orchestrator | 2025-06-11 15:11:18 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:18.744802 | orchestrator | 2025-06-11 15:11:18 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:21.789038 | orchestrator | 2025-06-11 15:11:21 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:21.789159 | orchestrator | 2025-06-11 15:11:21 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:24.825577 | orchestrator | 2025-06-11 15:11:24 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:24.825688 | orchestrator | 2025-06-11 15:11:24 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:27.867899 | orchestrator | 2025-06-11 15:11:27 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:27.868013 | orchestrator | 2025-06-11 15:11:27 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:30.911356 | orchestrator | 2025-06-11 15:11:30 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:30.911469 | orchestrator | 2025-06-11 15:11:30 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:33.952731 | orchestrator | 2025-06-11 15:11:33 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:33.952841 | orchestrator | 2025-06-11 15:11:33 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:36.994497 | orchestrator | 2025-06-11 15:11:36 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:36.994607 | orchestrator | 2025-06-11 15:11:36 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:40.037355 | orchestrator | 2025-06-11 15:11:40 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:40.037475 | orchestrator | 2025-06-11 15:11:40 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:43.074868 | orchestrator | 2025-06-11 15:11:43 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:43.075021 | orchestrator | 2025-06-11 15:11:43 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:46.135767 | orchestrator | 2025-06-11 15:11:46 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:46.135874 | orchestrator | 2025-06-11 15:11:46 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:49.193185 | orchestrator | 2025-06-11 15:11:49 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:49.193329 | orchestrator | 2025-06-11 15:11:49 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:52.236416 | orchestrator | 2025-06-11 15:11:52 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:52.238069 | orchestrator | 2025-06-11 15:11:52 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:55.283695 | orchestrator | 2025-06-11 15:11:55 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:55.283820 | orchestrator | 2025-06-11 15:11:55 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:11:58.323786 | orchestrator | 2025-06-11 15:11:58 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:11:58.323900 | orchestrator | 2025-06-11 15:11:58 | INFO  | Wait 1 second(s) 
until the next check 2025-06-11 15:12:01.362504 | orchestrator | 2025-06-11 15:12:01 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:01.362647 | orchestrator | 2025-06-11 15:12:01 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:04.408698 | orchestrator | 2025-06-11 15:12:04 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:04.408812 | orchestrator | 2025-06-11 15:12:04 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:07.452391 | orchestrator | 2025-06-11 15:12:07 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:07.452497 | orchestrator | 2025-06-11 15:12:07 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:10.498632 | orchestrator | 2025-06-11 15:12:10 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state STARTED 2025-06-11 15:12:10.499933 | orchestrator | 2025-06-11 15:12:10 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:10.500218 | orchestrator | 2025-06-11 15:12:10 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:13.544604 | orchestrator | 2025-06-11 15:12:13 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state STARTED 2025-06-11 15:12:13.545764 | orchestrator | 2025-06-11 15:12:13 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:13.545832 | orchestrator | 2025-06-11 15:12:13 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:16.606450 | orchestrator | 2025-06-11 15:12:16 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state STARTED 2025-06-11 15:12:16.606555 | orchestrator | 2025-06-11 15:12:16 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:16.606572 | orchestrator | 2025-06-11 15:12:16 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:19.641648 | orchestrator | 2025-06-11 15:12:19 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state STARTED 2025-06-11 15:12:19.643266 | orchestrator | 2025-06-11 15:12:19 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:19.643309 | orchestrator | 2025-06-11 15:12:19 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:22.699570 | orchestrator | 2025-06-11 15:12:22 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state STARTED 2025-06-11 15:12:22.701380 | orchestrator | 2025-06-11 15:12:22 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:22.701427 | orchestrator | 2025-06-11 15:12:22 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:25.750584 | orchestrator | 2025-06-11 15:12:25 | INFO  | Task bac8bcd3-7c23-47ae-931e-bd5784dc9380 is in state SUCCESS 2025-06-11 15:12:25.752315 | orchestrator | 2025-06-11 15:12:25 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:25.752357 | orchestrator | 2025-06-11 15:12:25 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:28.784346 | orchestrator | 2025-06-11 15:12:28 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:28.784457 | orchestrator | 2025-06-11 15:12:28 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:31.831473 | orchestrator | 2025-06-11 15:12:31 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:31.831588 | orchestrator | 2025-06-11 15:12:31 | INFO  | Wait 1 second(s) until the 
next check 2025-06-11 15:12:34.881083 | orchestrator | 2025-06-11 15:12:34 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:34.881139 | orchestrator | 2025-06-11 15:12:34 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:37.924052 | orchestrator | 2025-06-11 15:12:37 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:37.924160 | orchestrator | 2025-06-11 15:12:37 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:40.973248 | orchestrator | 2025-06-11 15:12:40 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:40.973365 | orchestrator | 2025-06-11 15:12:40 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:44.018500 | orchestrator | 2025-06-11 15:12:44 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:44.018600 | orchestrator | 2025-06-11 15:12:44 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:47.062325 | orchestrator | 2025-06-11 15:12:47 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:47.062449 | orchestrator | 2025-06-11 15:12:47 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:50.103380 | orchestrator | 2025-06-11 15:12:50 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:50.103495 | orchestrator | 2025-06-11 15:12:50 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:53.144315 | orchestrator | 2025-06-11 15:12:53 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:53.144449 | orchestrator | 2025-06-11 15:12:53 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:56.195140 | orchestrator | 2025-06-11 15:12:56 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:56.195250 | orchestrator | 2025-06-11 15:12:56 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:12:59.238127 | orchestrator | 2025-06-11 15:12:59 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:12:59.238234 | orchestrator | 2025-06-11 15:12:59 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:02.272554 | orchestrator | 2025-06-11 15:13:02 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:02.272680 | orchestrator | 2025-06-11 15:13:02 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:05.317739 | orchestrator | 2025-06-11 15:13:05 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:05.317850 | orchestrator | 2025-06-11 15:13:05 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:08.359337 | orchestrator | 2025-06-11 15:13:08 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:08.359444 | orchestrator | 2025-06-11 15:13:08 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:11.404177 | orchestrator | 2025-06-11 15:13:11 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:11.404292 | orchestrator | 2025-06-11 15:13:11 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:14.439510 | orchestrator | 2025-06-11 15:13:14 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:14.439621 | orchestrator | 2025-06-11 15:13:14 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:17.475791 | orchestrator | 2025-06-11 15:13:17 | INFO  
| Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:17.475929 | orchestrator | 2025-06-11 15:13:17 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:20.522174 | orchestrator | 2025-06-11 15:13:20 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:20.522290 | orchestrator | 2025-06-11 15:13:20 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:23.553102 | orchestrator | 2025-06-11 15:13:23 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:23.553197 | orchestrator | 2025-06-11 15:13:23 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:26.581697 | orchestrator | 2025-06-11 15:13:26 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:26.581778 | orchestrator | 2025-06-11 15:13:26 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:29.603607 | orchestrator | 2025-06-11 15:13:29 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:29.603697 | orchestrator | 2025-06-11 15:13:29 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:32.647169 | orchestrator | 2025-06-11 15:13:32 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:32.647250 | orchestrator | 2025-06-11 15:13:32 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:35.687232 | orchestrator | 2025-06-11 15:13:35 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:35.687328 | orchestrator | 2025-06-11 15:13:35 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:38.725339 | orchestrator | 2025-06-11 15:13:38 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:38.725481 | orchestrator | 2025-06-11 15:13:38 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:41.770401 | orchestrator | 2025-06-11 15:13:41 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:41.770514 | orchestrator | 2025-06-11 15:13:41 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:44.813980 | orchestrator | 2025-06-11 15:13:44 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:44.814209 | orchestrator | 2025-06-11 15:13:44 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:47.872376 | orchestrator | 2025-06-11 15:13:47 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:47.872486 | orchestrator | 2025-06-11 15:13:47 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:50.913518 | orchestrator | 2025-06-11 15:13:50 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:50.913606 | orchestrator | 2025-06-11 15:13:50 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:53.965389 | orchestrator | 2025-06-11 15:13:53 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:53.965522 | orchestrator | 2025-06-11 15:13:53 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:13:57.013433 | orchestrator | 2025-06-11 15:13:57 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 15:13:57.013543 | orchestrator | 2025-06-11 15:13:57 | INFO  | Wait 1 second(s) until the next check 2025-06-11 15:14:00.061276 | orchestrator | 2025-06-11 15:14:00 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state STARTED 2025-06-11 
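These STARTED/SUCCESS lines are the deployment wrapper polling its task queue until every submitted task leaves the STARTED state. A minimal sketch of such a poll loop, assuming a configured Celery application `app` and using the task IDs visible in the log (the helper name, its arguments, and the one-second interval are illustrative, not the actual osism implementation):

    import time

    from celery.result import AsyncResult

    def wait_for_tasks(app, task_ids, interval=1):
        # Poll every task until it reaches a terminal Celery state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id, app=app).state
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    # wait_for_tasks(app, ["7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7",
    #                      "56d11f9a-0261-4811-8d50-a81307418915"])

The roughly three-second cadence in the log is simply this one-second sleep plus the latency of the state lookups.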
2025-06-11 15:14:00.061366 | orchestrator | 2025-06-11 15:14:00 | INFO  | Wait 1 second(s) until the next check
2025-06-11 15:14:03.115230 | orchestrator |
2025-06-11 15:14:03.115406 | orchestrator |
2025-06-11 15:14:03.115424 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:14:03.115436 | orchestrator |
2025-06-11 15:14:03.115448 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:14:03.115460 | orchestrator | Wednesday 11 June 2025 15:07:46 +0000 (0:00:00.313) 0:00:00.313 ********
2025-06-11 15:14:03.115471 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.115483 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:14:03.115494 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:14:03.115505 | orchestrator |
2025-06-11 15:14:03.115516 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:14:03.115527 | orchestrator | Wednesday 11 June 2025 15:07:46 +0000 (0:00:00.313) 0:00:00.627 ********
2025-06-11 15:14:03.115538 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-06-11 15:14:03.115599 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-06-11 15:14:03.115611 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-06-11 15:14:03.115621 | orchestrator |
2025-06-11 15:14:03.115632 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-06-11 15:14:03.115643 | orchestrator |
2025-06-11 15:14:03.115654 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-11 15:14:03.115665 | orchestrator | Wednesday 11 June 2025 15:07:46 +0000 (0:00:00.383) 0:00:01.011 ********
2025-06-11 15:14:03.115676 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.115690 | orchestrator |
2025-06-11 15:14:03.115703 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-06-11 15:14:03.115715 | orchestrator | Wednesday 11 June 2025 15:07:47 +0000 (0:00:00.533) 0:00:01.544 ********
2025-06-11 15:14:03.115728 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-11 15:14:03.115767 | orchestrator |
2025-06-11 15:14:03.115779 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-11 15:14:03.115791 | orchestrator | Wednesday 11 June 2025 15:07:50 +0000 (0:00:03.442) 0:00:04.987 ********
2025-06-11 15:14:03.115803 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-11 15:14:03.115815 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-11 15:14:03.115827 | orchestrator |
2025-06-11 15:14:03.115840 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-11 15:14:03.115852 | orchestrator | Wednesday 11 June 2025 15:07:57 +0000 (0:00:06.775) 0:00:11.762 ********
2025-06-11 15:14:03.115865 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-11 15:14:03.115877 | orchestrator |
2025-06-11 15:14:03.115889 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-11 15:14:03.115901 | orchestrator | Wednesday 11 June 2025 15:08:00 +0000 (0:00:03.208) 0:00:14.971 ********
2025-06-11 15:14:03.115914 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-11 15:14:03.115928 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-11 15:14:03.115941 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-11 15:14:03.115953 | orchestrator |
2025-06-11 15:14:03.115965 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-11 15:14:03.115978 | orchestrator | Wednesday 11 June 2025 15:08:08 +0000 (0:00:08.087) 0:00:23.058 ********
2025-06-11 15:14:03.115990 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-11 15:14:03.116002 | orchestrator |
2025-06-11 15:14:03.116015 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-11 15:14:03.116028 | orchestrator | Wednesday 11 June 2025 15:08:12 +0000 (0:00:03.225) 0:00:26.284 ********
2025-06-11 15:14:03.116041 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-11 15:14:03.116053 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-11 15:14:03.116065 | orchestrator |
2025-06-11 15:14:03.116075 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-11 15:14:03.116086 | orchestrator | Wednesday 11 June 2025 15:08:19 +0000 (0:00:07.197) 0:00:33.481 ********
2025-06-11 15:14:03.116097 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-11 15:14:03.116107 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-11 15:14:03.116118 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-11 15:14:03.116177 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-11 15:14:03.116190 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-11 15:14:03.116201 | orchestrator |
2025-06-11 15:14:03.116211 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-11 15:14:03.116222 | orchestrator | Wednesday 11 June 2025 15:08:35 +0000 (0:00:15.828) 0:00:49.310 ********
2025-06-11 15:14:03.116232 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.116243 | orchestrator |
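The service-ks-register block above registers Octavia in Keystone: a load-balancer service entry, internal and public endpoints, the service project, the octavia user, and admin role grants. A rough openstacksdk equivalent, assuming a clouds.yaml profile named `testbed` with admin rights; the region name and password are placeholders and the call sequence is illustrative, not what kolla-ansible executes:

    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry
    service = conn.identity.create_service(name="octavia", type="load-balancer")
    for interface, url in (("internal", "https://api-int.testbed.osism.xyz:9876"),
                           ("public", "https://api.testbed.osism.xyz:9876")):
        conn.identity.create_endpoint(service_id=service.id, interface=interface,
                                      url=url, region_id="RegionOne")  # region assumed
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="octavia", password="CHANGEME",
                                     default_project_id=project.id)
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)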
2025-06-11 15:14:03.116253 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-11 15:14:03.116264 | orchestrator | Wednesday 11 June 2025 15:08:35 +0000 (0:00:00.613) 0:00:49.924 ********
2025-06-11 15:14:03.116298 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "503 Service Unavailable\nNo server is available to handle this request.\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."}
2025-06-11 15:14:03.116322 | orchestrator |
2025-06-11 15:14:03.116334 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:14:03.116346 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116358 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116369 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116380 | orchestrator |
2025-06-11 15:14:03.116391 | orchestrator |
2025-06-11 15:14:03.116401 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:14:03.116412 | orchestrator | Wednesday 11 June 2025 15:08:39 +0000 (0:00:03.463) 0:00:53.387 ********
2025-06-11 15:14:03.116423 | orchestrator | ===============================================================================
2025-06-11 15:14:03.116433 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.83s
2025-06-11 15:14:03.116444 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s
2025-06-11 15:14:03.116454 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.20s
2025-06-11 15:14:03.116465 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.78s
2025-06-11 15:14:03.116475 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.46s
2025-06-11 15:14:03.116486 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.44s
2025-06-11 15:14:03.116496 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.23s
2025-06-11 15:14:03.116507 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.21s
2025-06-11 15:14:03.116517 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.61s
2025-06-11 15:14:03.116528 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s
2025-06-11 15:14:03.116539 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2025-06-11 15:14:03.116549 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-06-11 15:14:03.116560 | orchestrator |
2025-06-11 15:14:03.116571 | orchestrator |
2025-06-11 15:14:03.116581 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:14:03.116592 | orchestrator |
2025-06-11 15:14:03.116603 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:14:03.116614 | orchestrator | Wednesday 11 June 2025 15:07:15 +0000 (0:00:00.177) 0:00:00.177 ********
2025-06-11 15:14:03.116624 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.116635 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:14:03.116645 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:14:03.116656 | orchestrator |
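The flavor creation fails because haproxy on the API VIP has no healthy nova-api backend yet; the "Wait for the Nova service" play below only sees the public port come up after about 135 seconds. A quick stdlib probe that exercises the same failing URL, assuming the internal endpoint is reachable from where it runs (illustrative, not part of the job):

    import urllib.error
    import urllib.request

    url = "https://api-int.testbed.osism.xyz:8774/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(resp.status, resp.read(200))  # Nova normally answers with its version document
    except urllib.error.HTTPError as err:
        print(err.code, err.reason)  # a 503 here reproduces the error in the log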
2025-06-11 15:14:03.116667 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:14:03.116677 | orchestrator | Wednesday 11 June 2025 15:07:16 +0000 (0:00:00.302) 0:00:00.480 ********
2025-06-11 15:14:03.116688 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-06-11 15:14:03.116699 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-06-11 15:14:03.116709 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-06-11 15:14:03.116720 | orchestrator |
2025-06-11 15:14:03.116730 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-06-11 15:14:03.116741 | orchestrator |
2025-06-11 15:14:03.116751 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-06-11 15:14:03.116762 | orchestrator | Wednesday 11 June 2025 15:07:16 +0000 (0:00:00.652) 0:00:01.133 ********
2025-06-11 15:14:03.116772 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.116783 | orchestrator | ok: [testbed-node-1]
2025-06-11 15:14:03.116794 | orchestrator | ok: [testbed-node-2]
2025-06-11 15:14:03.116811 | orchestrator |
2025-06-11 15:14:03.116822 | orchestrator | PLAY RECAP *********************************************************************
2025-06-11 15:14:03.116832 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116843 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116854 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-11 15:14:03.116865 | orchestrator |
2025-06-11 15:14:03.116875 | orchestrator |
2025-06-11 15:14:03.116886 | orchestrator | TASKS RECAP ********************************************************************
2025-06-11 15:14:03.116896 | orchestrator | Wednesday 11 June 2025 15:09:31 +0000 (0:02:14.751) 0:02:15.884 ********
2025-06-11 15:14:03.116907 | orchestrator | ===============================================================================
2025-06-11 15:14:03.116917 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 134.75s
2025-06-11 15:14:03.116928 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-06-11 15:14:03.116938 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-11 15:14:03.116949 | orchestrator |
2025-06-11 15:14:03.116960 | orchestrator | None
2025-06-11 15:14:03.116971 | orchestrator |
2025-06-11 15:14:03.116982 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-11 15:14:03.116992 | orchestrator |
2025-06-11 15:14:03.117003 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-06-11 15:14:03.117020 | orchestrator | Wednesday 11 June 2025 15:05:06 +0000 (0:00:00.252) 0:00:00.253 ********
2025-06-11 15:14:03.117031 | orchestrator | changed: [testbed-manager]
2025-06-11 15:14:03.117042 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117052 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:14:03.117063 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:14:03.117073 | orchestrator | changed: [testbed-node-3]
2025-06-11 15:14:03.117083 | orchestrator | changed: [testbed-node-4]
2025-06-11 15:14:03.117094 | orchestrator | changed: [testbed-node-5]
2025-06-11 15:14:03.117104 | orchestrator |
2025-06-11 15:14:03.117115 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-11 15:14:03.117151 | orchestrator | Wednesday 11 June 2025 15:05:07 +0000 (0:00:00.704) 0:00:00.957 ********
2025-06-11 15:14:03.117165 | orchestrator | changed: [testbed-manager]
2025-06-11 15:14:03.117175 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117186 | orchestrator | changed: [testbed-node-1]
2025-06-11 15:14:03.117196 | orchestrator | changed: [testbed-node-2]
2025-06-11 15:14:03.117206 | orchestrator | changed: [testbed-node-3]
2025-06-11 15:14:03.117217 | orchestrator | changed: [testbed-node-4]
2025-06-11 15:14:03.117227 | orchestrator | changed: [testbed-node-5]
2025-06-11 15:14:03.117237 | orchestrator |
2025-06-11 15:14:03.117248 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-11 15:14:03.117259 | orchestrator | Wednesday 11 June 2025 15:05:08 +0000 (0:00:00.570) 0:00:01.528 ********
2025-06-11 15:14:03.117269 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-06-11 15:14:03.117280 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-06-11 15:14:03.117290 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-06-11 15:14:03.117301 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-06-11 15:14:03.117311 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-06-11 15:14:03.117321 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-06-11 15:14:03.117332 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-06-11 15:14:03.117342 | orchestrator |
2025-06-11 15:14:03.117353 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-06-11 15:14:03.117364 | orchestrator |
2025-06-11 15:14:03.117381 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-11 15:14:03.117392 | orchestrator | Wednesday 11 June 2025 15:05:08 +0000 (0:00:00.755) 0:00:02.283 ********
2025-06-11 15:14:03.117402 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.117413 | orchestrator |
2025-06-11 15:14:03.117423 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-06-11 15:14:03.117434 | orchestrator | Wednesday 11 June 2025 15:05:09 +0000 (0:00:00.722) 0:00:03.005 ********
2025-06-11 15:14:03.117444 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-06-11 15:14:03.117455 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-06-11 15:14:03.117466 | orchestrator |
2025-06-11 15:14:03.117476 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-06-11 15:14:03.117487 | orchestrator | Wednesday 11 June 2025 15:05:13 +0000 (0:00:04.249) 0:00:07.255 ********
2025-06-11 15:14:03.117497 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 15:14:03.117508 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-11 15:14:03.117518 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117529 | orchestrator |
2025-06-11 15:14:03.117539 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-11 15:14:03.117550 | orchestrator | Wednesday 11 June 2025 15:05:18 +0000 (0:00:00.594) 0:00:11.621 ********
2025-06-11 15:14:03.117560 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117571 | orchestrator |
2025-06-11 15:14:03.117581 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-06-11 15:14:03.117592 | orchestrator | Wednesday 11 June 2025 15:05:18 +0000 (0:00:00.594) 0:00:12.216 ********
2025-06-11 15:14:03.117602 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117613 | orchestrator |
2025-06-11 15:14:03.117623 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-06-11 15:14:03.117633 | orchestrator | Wednesday 11 June 2025 15:05:20 +0000 (0:00:01.224) 0:00:13.440 ********
2025-06-11 15:14:03.117644 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117654 | orchestrator |
2025-06-11 15:14:03.117665 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-11 15:14:03.117675 | orchestrator | Wednesday 11 June 2025 15:05:22 +0000 (0:00:02.466) 0:00:15.907 ********
2025-06-11 15:14:03.117686 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.117696 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.117707 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.117717 | orchestrator |
2025-06-11 15:14:03.117727 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-11 15:14:03.117738 | orchestrator | Wednesday 11 June 2025 15:05:22 +0000 (0:00:00.289) 0:00:16.196 ********
2025-06-11 15:14:03.117749 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.117759 | orchestrator |
2025-06-11 15:14:03.117769 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-06-11 15:14:03.117780 | orchestrator | Wednesday 11 June 2025 15:06:12 +0000 (0:00:49.541) 0:01:05.738 ********
2025-06-11 15:14:03.117791 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.117801 | orchestrator |
2025-06-11 15:14:03.117811 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-11 15:14:03.117822 | orchestrator | Wednesday 11 June 2025 15:06:28 +0000 (0:00:15.863) 0:01:21.601 ********
2025-06-11 15:14:03.117833 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.117843 | orchestrator |
2025-06-11 15:14:03.117854 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-11 15:14:03.117864 | orchestrator | Wednesday 11 June 2025 15:06:42 +0000 (0:00:14.068) 0:01:35.670 ********
2025-06-11 15:14:03.117875 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.117885 | orchestrator |
2025-06-11 15:14:03.117896 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-06-11 15:14:03.117907 | orchestrator | Wednesday 11 June 2025 15:06:43 +0000 (0:00:01.164) 0:01:36.835 ********
2025-06-11 15:14:03.117924 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.117935 | orchestrator |
2025-06-11 15:14:03.117953 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-11 15:14:03.117964 | orchestrator | Wednesday 11 June 2025 15:06:43 +0000 (0:00:00.559) 0:01:37.394 ********
2025-06-11 15:14:03.117975 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.117986 | orchestrator |
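The cell0 mapping and cell listing above are nova-manage cell_v2 operations executed inside the Nova bootstrap container. A sketch of running the same commands directly, assuming a host where nova-manage is installed and /etc/nova/nova.conf is configured (the subprocess wrapper is illustrative, not how kolla invokes them):

    import subprocess

    # map_cell0 registers the cell0 database; list_cells shows the result.
    for args in (["nova-manage", "cell_v2", "map_cell0"],
                 ["nova-manage", "cell_v2", "list_cells"]):
        result = subprocess.run(args, capture_output=True, text=True, check=True)
        print(result.stdout)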
2025-06-11 15:14:03.117996 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-11 15:14:03.118007 | orchestrator | Wednesday 11 June 2025 15:06:44 +0000 (0:00:00.821) 0:01:38.216 ********
2025-06-11 15:14:03.118069 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.118081 | orchestrator |
2025-06-11 15:14:03.118092 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-11 15:14:03.118102 | orchestrator | Wednesday 11 June 2025 15:07:03 +0000 (0:00:18.858) 0:01:57.074 ********
2025-06-11 15:14:03.118113 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.118123 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118193 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118205 | orchestrator |
2025-06-11 15:14:03.118215 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-11 15:14:03.118226 | orchestrator |
2025-06-11 15:14:03.118237 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-11 15:14:03.118247 | orchestrator | Wednesday 11 June 2025 15:07:04 +0000 (0:00:00.375) 0:01:57.450 ********
2025-06-11 15:14:03.118258 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.118268 | orchestrator |
2025-06-11 15:14:03.118279 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-11 15:14:03.118289 | orchestrator | Wednesday 11 June 2025 15:07:04 +0000 (0:00:00.608) 0:01:58.059 ********
2025-06-11 15:14:03.118300 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118310 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118321 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.118331 | orchestrator |
2025-06-11 15:14:03.118342 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-11 15:14:03.118352 | orchestrator | Wednesday 11 June 2025 15:07:06 +0000 (0:00:02.308) 0:02:00.367 ********
2025-06-11 15:14:03.118363 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118373 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118384 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.118394 | orchestrator |
2025-06-11 15:14:03.118405 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-11 15:14:03.118414 | orchestrator | Wednesday 11 June 2025 15:07:09 +0000 (0:00:02.451) 0:02:02.818 ********
2025-06-11 15:14:03.118423 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.118433 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118442 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118451 | orchestrator |
2025-06-11 15:14:03.118461 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-11 15:14:03.118470 | orchestrator | Wednesday 11 June 2025 15:07:09 +0000 (0:00:00.334) 0:02:03.153 ********
2025-06-11 15:14:03.118480 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-11 15:14:03.118489 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118498 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-11 15:14:03.118508 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118517 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-11 15:14:03.118526 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-11 15:14:03.118536 | orchestrator |
2025-06-11 15:14:03.118545 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-11 15:14:03.118555 | orchestrator | Wednesday 11 June 2025 15:07:19 +0000 (0:00:09.265) 0:02:12.419 ********
2025-06-11 15:14:03.118564 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.118581 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118590 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118599 | orchestrator |
2025-06-11 15:14:03.118609 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-11 15:14:03.118618 | orchestrator | Wednesday 11 June 2025 15:07:19 +0000 (0:00:00.349) 0:02:12.769 ********
2025-06-11 15:14:03.118627 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-11 15:14:03.118637 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.118646 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-11 15:14:03.118655 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118665 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-11 15:14:03.118674 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118683 | orchestrator |
2025-06-11 15:14:03.118693 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-11 15:14:03.118702 | orchestrator | Wednesday 11 June 2025 15:07:19 +0000 (0:00:00.618) 0:02:13.388 ********
2025-06-11 15:14:03.118711 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118720 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.118730 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118739 | orchestrator |
2025-06-11 15:14:03.118748 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-11 15:14:03.118758 | orchestrator | Wednesday 11 June 2025 15:07:20 +0000 (0:00:00.516) 0:02:13.905 ********
2025-06-11 15:14:03.118767 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118777 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118786 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.118795 | orchestrator |
2025-06-11 15:14:03.118804 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-11 15:14:03.118814 | orchestrator | Wednesday 11 June 2025 15:07:21 +0000 (0:00:01.045) 0:02:14.950 ********
2025-06-11 15:14:03.118824 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118833 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118842 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.118852 | orchestrator |
2025-06-11 15:14:03.118861 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-11 15:14:03.118871 | orchestrator | Wednesday 11 June 2025 15:07:23 +0000 (0:00:02.021) 0:02:16.972 ********
2025-06-11 15:14:03.118886 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118896 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118905 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.118914 | orchestrator |
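The service-rabbitmq tasks above ensure the vhost and user that nova-cell needs exist; on this run they are either skipped or already satisfied on testbed-node-0. For orientation, a hedged sketch of the same guarantees through RabbitMQ's HTTP management API (port 15672), with placeholder host and credentials; kolla-ansible itself uses the rabbitmq_vhost/rabbitmq_user Ansible modules, not this API:

    import requests

    base = "http://testbed-node-0:15672/api"  # assumed management endpoint
    auth = ("admin", "CHANGEME")              # placeholder credentials
    # Create (or confirm) the default vhost, the nova user, and its permissions.
    requests.put(f"{base}/vhosts/%2F", auth=auth).raise_for_status()
    requests.put(f"{base}/users/nova", auth=auth,
                 json={"password": "CHANGEME", "tags": ""}).raise_for_status()
    requests.put(f"{base}/permissions/%2F/nova", auth=auth,
                 json={"configure": ".*", "write": ".*", "read": ".*"}).raise_for_status()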
2025-06-11 15:14:03.118924 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-11 15:14:03.118933 | orchestrator | Wednesday 11 June 2025 15:07:45 +0000 (0:00:21.497) 0:02:38.469 ********
2025-06-11 15:14:03.118942 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.118952 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.118961 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.118970 | orchestrator |
2025-06-11 15:14:03.118980 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-11 15:14:03.118989 | orchestrator | Wednesday 11 June 2025 15:07:57 +0000 (0:00:12.287) 0:02:50.756 ********
2025-06-11 15:14:03.118999 | orchestrator | ok: [testbed-node-0]
2025-06-11 15:14:03.119008 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.119017 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.119027 | orchestrator |
2025-06-11 15:14:03.119036 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-11 15:14:03.119083 | orchestrator | Wednesday 11 June 2025 15:07:58 +0000 (0:00:00.834) 0:02:51.591 ********
2025-06-11 15:14:03.119094 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.119103 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.119113 | orchestrator | changed: [testbed-node-0]
2025-06-11 15:14:03.119208 | orchestrator |
2025-06-11 15:14:03.119222 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-11 15:14:03.119239 | orchestrator | Wednesday 11 June 2025 15:08:09 +0000 (0:00:11.402) 0:03:02.994 ********
2025-06-11 15:14:03.119249 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.119258 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.119268 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.119277 | orchestrator |
2025-06-11 15:14:03.119286 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-11 15:14:03.119296 | orchestrator | Wednesday 11 June 2025 15:08:10 +0000 (0:00:01.408) 0:03:04.402 ********
2025-06-11 15:14:03.119305 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.119408 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.119418 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.119466 | orchestrator |
2025-06-11 15:14:03.119478 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-11 15:14:03.119488 | orchestrator |
2025-06-11 15:14:03.119534 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-11 15:14:03.119545 | orchestrator | Wednesday 11 June 2025 15:08:11 +0000 (0:00:00.344) 0:03:04.746 ********
2025-06-11 15:14:03.119554 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.119564 | orchestrator |
2025-06-11 15:14:03.119573 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-11 15:14:03.119583 | orchestrator | Wednesday 11 June 2025 15:08:11 +0000 (0:00:00.539) 0:03:05.286 ********
2025-06-11 15:14:03.119592 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-11 15:14:03.119602 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-11 15:14:03.119611 | orchestrator |
2025-06-11 15:14:03.119621 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
| orchestrator | Wednesday 11 June 2025 15:08:15 +0000 (0:00:03.353) 0:03:08.639 ******** 2025-06-11 15:14:03.119684 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-11 15:14:03.119695 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-11 15:14:03.119705 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-11 15:14:03.119715 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-11 15:14:03.119724 | orchestrator | 2025-06-11 15:14:03.119734 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-11 15:14:03.119743 | orchestrator | Wednesday 11 June 2025 15:08:21 +0000 (0:00:06.540) 0:03:15.179 ******** 2025-06-11 15:14:03.119753 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-11 15:14:03.119762 | orchestrator | 2025-06-11 15:14:03.119771 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-11 15:14:03.119781 | orchestrator | Wednesday 11 June 2025 15:08:25 +0000 (0:00:03.235) 0:03:18.415 ******** 2025-06-11 15:14:03.119790 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-11 15:14:03.119800 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-11 15:14:03.119809 | orchestrator | 2025-06-11 15:14:03.119819 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-11 15:14:03.119828 | orchestrator | Wednesday 11 June 2025 15:08:29 +0000 (0:00:04.108) 0:03:22.523 ******** 2025-06-11 15:14:03.119838 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-11 15:14:03.119847 | orchestrator | 2025-06-11 15:14:03.119856 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-11 15:14:03.119866 | orchestrator | Wednesday 11 June 2025 15:08:32 +0000 (0:00:03.301) 0:03:25.825 ******** 2025-06-11 15:14:03.119875 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-11 15:14:03.119885 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-11 15:14:03.119899 | orchestrator | 2025-06-11 15:14:03.119908 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-11 15:14:03.119918 | orchestrator | Wednesday 11 June 2025 15:08:40 +0000 (0:00:07.643) 0:03:33.468 ******** 2025-06-11 15:14:03.119952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.119983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-06-11 15:14:03.120041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120061 | orchestrator | 2025-06-11 15:14:03.120071 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-11 15:14:03.120081 | orchestrator | Wednesday 11 June 2025 15:08:41 +0000 (0:00:01.319) 0:03:34.788 ******** 2025-06-11 15:14:03.120090 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.120100 | orchestrator | 2025-06-11 15:14:03.120109 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-11 15:14:03.120119 | orchestrator | Wednesday 11 June 2025 15:08:41 +0000 (0:00:00.176) 0:03:34.965 ******** 2025-06-11 15:14:03.120183 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.120195 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.120204 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.120213 | orchestrator | 2025-06-11 15:14:03.120223 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-11 15:14:03.120232 | orchestrator | Wednesday 11 June 2025 15:08:42 +0000 (0:00:00.539) 0:03:35.505 ******** 2025-06-11 15:14:03.120241 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-11 15:14:03.120250 | orchestrator | 2025-06-11 15:14:03.120260 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-11 15:14:03.120269 | orchestrator | Wednesday 11 June 2025 15:08:42 +0000 (0:00:00.652) 0:03:36.158 ******** 2025-06-11 15:14:03.120278 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.120288 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.120297 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.120306 | orchestrator | 2025-06-11 15:14:03.120316 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-11 15:14:03.120325 | orchestrator | Wednesday 11 June 2025 15:08:43 +0000 (0:00:00.311) 0:03:36.470 ******** 2025-06-11 15:14:03.120334 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:14:03.120343 | orchestrator | 2025-06-11 15:14:03.120353 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 
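[Editor's note] The service-ks-register tasks above are kolla-ansible's generic Keystone bootstrap for a service: catalog entry, internal and public endpoints, service project and user, and role grants, all idempotent (hence the mix of ok/changed/skipping per item, with the legacy nova_legacy items skipped). Roughly the same result via the OpenStack CLI looks like the sketch below; the admin-scoped environment, the RegionOne region name, and the $NOVA_KEYSTONE_PASSWORD variable are assumptions, since none of them are shown in this log.

  # Catalog entry and endpoints; URLs taken from the task output above
  openstack service create --name nova compute
  openstack endpoint create --region RegionOne nova internal https://api-int.testbed.osism.xyz:8774/v2.1
  openstack endpoint create --region RegionOne nova public https://api.testbed.osism.xyz:8774/v2.1
  # Service user plus the two grants from the "Granting user roles" task
  openstack user create --project service --password "$NOVA_KEYSTONE_PASSWORD" nova   # password variable is illustrative
  openstack role add --project service --user nova admin
  openstack role add --project service --user nova service

The "[WARNING]: Module did not set no_log for update_password" seen in the user-creation task is a logging-hygiene warning from the Ansible module, not a task failure.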
2025-06-11 15:14:03.120362 | orchestrator | Wednesday 11 June 2025 15:08:43 +0000 (0:00:00.684) 0:03:37.155 ******** 2025-06-11 15:14:03.120387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120482 | orchestrator | 2025-06-11 15:14:03.120491 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-11 15:14:03.120501 | orchestrator | Wednesday 11 June 2025 15:08:46 +0000 (0:00:02.279) 0:03:39.435 ******** 2025-06-11 15:14:03 | INFO  | Task 7f20d24d-2818-4a4b-aa7a-b47f7fae8ed7 is in state SUCCESS 2025-06-11 15:14:03.120517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774',
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.120549 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.120559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.120595 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.120613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.120633 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.120643 | orchestrator | 2025-06-11 15:14:03.120652 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-11 15:14:03.120662 | orchestrator | Wednesday 11 June 2025 15:08:46 +0000 (0:00:00.592) 0:03:40.027 ******** 2025-06-11 15:14:03.120672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
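[Editor's note] Both backend TLS copy tasks are skipped on every node, consistent with 'tls_backend': 'no' throughout the service definitions above: there is no internal TLS material to distribute in this build. The config.json files copied in the next task are what the kolla container entrypoint consumes: each container bind-mounts /etc/kolla/<service>/ into /var/lib/kolla/config_files/ read-only (see the 'volumes' lists above), and kolla_start copies the files listed in config.json into place before launching the service. A minimal sketch of the shape of such a file follows; the "command" value and the file list here are illustrative assumptions, not taken from this build.

  # Illustrative only; the real content is rendered by the nova role.
  cat > /etc/kolla/nova-api/config.json <<'EOF'
  {
    "command": "nova-api",
    "config_files": [
      {
        "source": "/var/lib/kolla/config_files/nova.conf",
        "dest": "/etc/nova/nova.conf",
        "owner": "nova",
        "perm": "0600"
      }
    ]
  }
  EOF

The nova-api-wsgi.conf copied a few tasks later suggests the API runs under a WSGI server in these 2024.2 images, so the actual "command" will differ from this sketch.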
2025-06-11 15:14:03.120697 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.120712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.120729 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.120738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.120751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.120759 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.120767 | orchestrator | 2025-06-11 15:14:03.120775 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-11 15:14:03.120783 | orchestrator | Wednesday 11 June 2025 15:08:47 +0000 (0:00:00.901) 0:03:40.929 ******** 2025-06-11 15:14:03.120801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120867 | orchestrator | 2025-06-11 15:14:03.120875 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-11 15:14:03.120883 | orchestrator | Wednesday 11 June 2025 15:08:49 +0000 (0:00:02.323) 0:03:43.253 ******** 2025-06-11 15:14:03.120891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.120933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.120965 | orchestrator | 2025-06-11 15:14:03.120972 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-11 15:14:03.120980 | orchestrator | Wednesday 11 June 2025 15:08:54 +0000 (0:00:05.016) 0:03:48.269 ******** 2025-06-11 15:14:03.120992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.121001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.121014 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.121022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-11 15:14:03.121036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.121045 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.121053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-06-11 15:14:03.121065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.121073 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.121081 | orchestrator | 2025-06-11 15:14:03.121089 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-11 15:14:03.121096 | orchestrator | Wednesday 11 June 2025 15:08:55 +0000 (0:00:00.574) 0:03:48.844 ******** 2025-06-11 15:14:03.121104 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.121112 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.121119 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.121143 | orchestrator | 2025-06-11 15:14:03.121151 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-11 15:14:03.121164 | orchestrator | Wednesday 11 June 2025 15:08:57 +0000 (0:00:01.936) 0:03:50.781 ******** 2025-06-11 15:14:03.121172 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.121180 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.121188 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.121195 | orchestrator | 2025-06-11 15:14:03.121203 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-11 15:14:03.121211 | orchestrator | Wednesday 11 June 2025 15:08:57 +0000 (0:00:00.318) 0:03:51.099 ******** 2025-06-11 15:14:03.121219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.121234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.121247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-11 15:14:03.121871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.121920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.121934 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.121955 | orchestrator | 2025-06-11 15:14:03.121969 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-11 15:14:03.121983 | orchestrator | Wednesday 11 June 2025 15:08:59 +0000 (0:00:01.780) 0:03:52.879 ******** 2025-06-11 15:14:03.121997 | orchestrator | 2025-06-11 15:14:03.122010 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-11 15:14:03.122073 | orchestrator | Wednesday 11 June 2025 15:08:59 +0000 (0:00:00.132) 0:03:53.012 ******** 2025-06-11 15:14:03.122087 | orchestrator | 2025-06-11 15:14:03.122103 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-11 15:14:03.122113 | orchestrator | Wednesday 11 June 2025 15:08:59 +0000 (0:00:00.179) 0:03:53.191 ******** 2025-06-11 15:14:03.122121 | orchestrator | 2025-06-11 15:14:03.122153 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-11 15:14:03.122161 | orchestrator | Wednesday 11 June 2025 15:09:00 +0000 (0:00:00.259) 0:03:53.451 ******** 2025-06-11 15:14:03.122169 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.122177 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.122185 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.122193 | orchestrator | 2025-06-11 15:14:03.122201 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-11 15:14:03.122209 | orchestrator | Wednesday 11 June 2025 15:09:23 +0000 (0:00:23.444) 0:04:16.895 ******** 2025-06-11 15:14:03.122216 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.122224 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.122232 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.122239 | orchestrator | 2025-06-11 15:14:03.122247 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-11 15:14:03.122255 | orchestrator | 2025-06-11 15:14:03.122263 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-11 15:14:03.122271 | orchestrator | Wednesday 11 June 2025 15:09:33 +0000 (0:00:10.276) 0:04:27.171 ******** 2025-06-11 15:14:03.122279 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:14:03.122287 | orchestrator | 2025-06-11 15:14:03.122295 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-11 15:14:03.122303 | orchestrator | Wednesday 11 June 2025 15:09:35 +0000 (0:00:01.253) 0:04:28.425 ******** 2025-06-11 15:14:03.122311 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.122319 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.122326 | orchestrator | skipping: 
[testbed-node-5] 2025-06-11 15:14:03.122334 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.122351 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.122369 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.122382 | orchestrator | 2025-06-11 15:14:03.122395 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-11 15:14:03.122408 | orchestrator | Wednesday 11 June 2025 15:09:35 +0000 (0:00:00.754) 0:04:29.179 ******** 2025-06-11 15:14:03.122422 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.122430 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.122438 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.122446 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-11 15:14:03.122454 | orchestrator | 2025-06-11 15:14:03.122461 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-11 15:14:03.122469 | orchestrator | Wednesday 11 June 2025 15:09:36 +0000 (0:00:00.971) 0:04:30.150 ******** 2025-06-11 15:14:03.122478 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-11 15:14:03.122486 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-11 15:14:03.122496 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-11 15:14:03.122510 | orchestrator | 2025-06-11 15:14:03.122533 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-11 15:14:03.122542 | orchestrator | Wednesday 11 June 2025 15:09:37 +0000 (0:00:00.629) 0:04:30.780 ******** 2025-06-11 15:14:03.122550 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-11 15:14:03.122557 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-11 15:14:03.122565 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-11 15:14:03.122573 | orchestrator | 2025-06-11 15:14:03.122581 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-11 15:14:03.122589 | orchestrator | Wednesday 11 June 2025 15:09:38 +0000 (0:00:01.190) 0:04:31.970 ******** 2025-06-11 15:14:03.122597 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-11 15:14:03.122605 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.122612 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-11 15:14:03.122620 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.122628 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-11 15:14:03.122636 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.122643 | orchestrator |
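[Editor's note] Note the host split in this block: the controllers (testbed-node-0/1/2) skip every task, while the compute nodes (testbed-node-3/4/5) load br_netfilter and persist it, so bridged instance traffic traverses the host firewall and Neutron security groups can take effect. Together with the sysctl task that follows, the per-compute-node effect is roughly the shell below; a minimal sketch, with the persistence file path assumed rather than read from the roles.

  # Load the module now and on every boot (persistence path is an assumption)
  modprobe br_netfilter
  echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
  # Let bridged IPv4/IPv6 frames hit the iptables/ip6tables rulesets
  sysctl -w net.bridge.bridge-nf-call-iptables=1
  sysctl -w net.bridge.bridge-nf-call-ip6tables=1

The "Drop module persistence" task above is the inverse path and is skipped in this run.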
2025-06-11 15:14:03.122651 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-11 15:14:03.122659 | orchestrator | Wednesday 11 June 2025 15:09:39 +0000 (0:00:00.666) 0:04:32.636 ******** 2025-06-11 15:14:03.122667 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 15:14:03.122674 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 15:14:03.122682 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.122690 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 15:14:03.122698 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 15:14:03.122705 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.122713 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-11 15:14:03.122721 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-11 15:14:03.122729 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-11 15:14:03.122736 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-11 15:14:03.122744 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.122752 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-11 15:14:03.122760 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-11 15:14:03.122768 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-11 15:14:03.122782 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-11 15:14:03.122790 | orchestrator | 2025-06-11 15:14:03.122798 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-11 15:14:03.122805 | orchestrator | Wednesday 11 June 2025 15:09:40 +0000 (0:00:01.027) 0:04:33.664 ******** 2025-06-11 15:14:03.122813 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.122821 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.122829 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.122836 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.122844 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.122852 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.122860 | orchestrator | 2025-06-11 15:14:03.122867 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-11 15:14:03.122875 | orchestrator | Wednesday 11 June 2025 15:09:41 +0000 (0:00:01.341) 0:04:35.006 ******** 2025-06-11 15:14:03.122883 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.122891 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.122899 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.122906 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.122914 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.122922 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.122929 | orchestrator | 2025-06-11 15:14:03.122937 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-11 15:14:03.122945 | orchestrator | Wednesday 11 June 2025 15:09:43 +0000 (0:00:01.565) 0:04:36.572 ******** 2025-06-11 15:14:03.122958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft':
2025-06-11 15:14:03.122937 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-11 15:14:03.122945 | orchestrator | Wednesday 11 June 2025 15:09:43 +0000 (0:00:01.565) 0:04:36.572 ********
2025-06-11 15:14:03.122958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.122974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.122983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.122997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123203 | orchestrator |
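The per-service dicts echoed above are the role's service map; the task simply loops over it and creates /etc/kolla/<service> on every host that belongs to the service's group. A sketch of the pattern, with the variable name assumed:

- name: Ensuring config directories exist          # sketch; variable names assumed
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    owner: "root"
    group: "root"
    mode: "0770"
  become: true
  when: inventory_hostname in groups[item.value.group]
  loop: "{{ nova_cell_services | dict2items }}"

This explains the mixed output per host: the control nodes (0/1/2) only change the nova-novncproxy and nova-conductor directories, while the compute nodes (3/4/5) change nova-libvirt, nova-ssh and nova-compute.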
2025-06-11 15:14:03.123213 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-11 15:14:03.123221 | orchestrator | Wednesday 11 June 2025 15:09:45 +0000 (0:00:02.464) 0:04:39.036 ********
2025-06-11 15:14:03.123229 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-11 15:14:03.123238 | orchestrator |
2025-06-11 15:14:03.123246 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-11 15:14:03.123254 | orchestrator | Wednesday 11 June 2025 15:09:46 +0000 (0:00:01.196) 0:04:40.232 ********
2025-06-11 15:14:03.123262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123418 | orchestrator |
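copy-certs.yml pulls any extra CA certificates into each service's config directory so they can be added to the containers' trust store at startup. Roughly, with the source path being an assumption:

- name: nova | Copying over extra CA certificates  # sketch; source path assumed
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  when: inventory_hostname in groups[item.value.group]
  loop: "{{ nova_cell_services | dict2items | selectattr('value.enabled') | list }}"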
2025-06-11 15:14:03.123426 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-11 15:14:03.123434 | orchestrator | Wednesday 11 June 2025 15:09:50 +0000 (0:00:03.362) 0:04:43.595 ********
2025-06-11 15:14:03.123449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123508 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.123517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123554 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.123563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123587 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.123596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123616 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.123628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123650 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.123658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123674 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.123682 | orchestrator |
2025-06-11 15:14:03.123689 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-11 15:14:03.123697 | orchestrator | Wednesday 11 June 2025 15:09:51 +0000 (0:00:01.780) 0:04:45.376 ********
2025-06-11 15:14:03.123706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123763 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.123770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123784 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.123794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.123809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.123821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123829 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.123836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123849 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.123856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123874 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.123884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.123895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.123903 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.123909 | orchestrator |
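The healthcheck dict repeated in these records is handed to Docker when each container is created; expressed as Compose-style YAML it corresponds roughly to:

healthcheck:                       # illustrative rendering of the dict above
  test: ["CMD-SHELL", "healthcheck_port nova-conductor 5672"]
  interval: 30s
  timeout: 30s
  retries: 3
  start_period: 5s

The helper scripts referenced in the tests (healthcheck_port, healthcheck_curl, healthcheck_listen) ship inside the kolla images.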
2025-06-11 15:14:03.123916 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-11 15:14:03.123923 | orchestrator | Wednesday 11 June 2025 15:09:53 +0000 (0:00:01.938) 0:04:47.314 ********
2025-06-11 15:14:03.123929 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.123936 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.123942 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.123949 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-11 15:14:03.123955 | orchestrator |
2025-06-11 15:14:03.123962 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-11 15:14:03.123969 | orchestrator | Wednesday 11 June 2025 15:09:54 +0000 (0:00:00.885) 0:04:48.200 ********
2025-06-11 15:14:03.123975 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-11 15:14:03.123982 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-11 15:14:03.123988 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-11 15:14:03.123995 | orchestrator |
2025-06-11 15:14:03.124001 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-11 15:14:03.124008 | orchestrator | Wednesday 11 June 2025 15:09:55 +0000 (0:00:01.109) 0:04:49.309 ********
2025-06-11 15:14:03.124014 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-11 15:14:03.124021 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-11 15:14:03.124027 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-11 15:14:03.124034 | orchestrator |
2025-06-11 15:14:03.124040 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-11 15:14:03.124047 | orchestrator | Wednesday 11 June 2025 15:09:56 +0000 (0:00:00.904) 0:04:50.214 ********
2025-06-11 15:14:03.124053 | orchestrator | ok: [testbed-node-3]
2025-06-11 15:14:03.124060 | orchestrator | ok: [testbed-node-4]
2025-06-11 15:14:03.124067 | orchestrator | ok: [testbed-node-5]
2025-06-11 15:14:03.124073 | orchestrator |
2025-06-11 15:14:03.124080 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-11 15:14:03.124086 | orchestrator | Wednesday 11 June 2025 15:09:57 +0000 (0:00:00.487) 0:04:50.701 ********
2025-06-11 15:14:03.124093 | orchestrator | ok: [testbed-node-3]
2025-06-11 15:14:03.124099 | orchestrator | ok: [testbed-node-4]
2025-06-11 15:14:03.124105 | orchestrator | ok: [testbed-node-5]
2025-06-11 15:14:03.124112 | orchestrator |
2025-06-11 15:14:03.124119 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-11 15:14:03.124142 | orchestrator | Wednesday 11 June 2025 15:09:57 +0000 (0:00:00.485) 0:04:51.187 ********
2025-06-11 15:14:03.124155 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-11 15:14:03.124162 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-11 15:14:03.124169 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-11 15:14:03.124175 | orchestrator |
2025-06-11 15:14:03.124182 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-11 15:14:03.124188 | orchestrator | Wednesday 11 June 2025 15:09:59 +0000 (0:00:01.345) 0:04:52.532 ********
2025-06-11 15:14:03.124195 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-11 15:14:03.124202 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-11 15:14:03.124208 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-11 15:14:03.124215 | orchestrator |
2025-06-11 15:14:03.124221 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-11 15:14:03.124228 | orchestrator | Wednesday 11 June 2025 15:10:00 +0000 (0:00:01.204) 0:04:53.736 ********
2025-06-11 15:14:03.124234 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-11 15:14:03.124241 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-11 15:14:03.124248 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-11 15:14:03.124254 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-11 15:14:03.124261 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-11 15:14:03.124267 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-11 15:14:03.124274 | orchestrator |
2025-06-11 15:14:03.124280 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-11 15:14:03.124287 | orchestrator | Wednesday 11 June 2025 15:10:03 +0000 (0:00:03.591) 0:04:57.328 ********
2025-06-11 15:14:03.124293 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.124300 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.124306 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.124313 | orchestrator |
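With an external Ceph cluster, external_ceph.yml verifies the nova and cinder keyrings on the deployment host, extracts the keys, and distributes keyring plus ceph.conf into the nova-compute (and nova-libvirt) config directories. A condensed sketch; the node_custom_config layout follows the usual kolla convention but is an assumption here:

- name: Check nova keyring file                    # runs on the deploy host
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/nova/ceph.client.nova.keyring"
  delegate_to: localhost
  register: nova_cephx_keyring_file

- name: Extract nova key from file
  ansible.builtin.set_fact:
    nova_cephx_key: "{{ lookup('file', nova_cephx_keyring_file.stat.path) | regex_search('key = (.+)', '\\1') | first }}"

- name: Copy over ceph nova keyring file
  ansible.builtin.template:
    src: "{{ nova_cephx_keyring_file.stat.path }}"
    dest: "/etc/kolla/{{ item }}/ceph.client.nova.keyring"
    mode: "0600"
  loop:
    - nova-compute

The two "(host libvirt)" tasks skip because this deployment runs libvirt inside the nova_libvirt container rather than on the host.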
2025-06-11 15:14:03.124323 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-11 15:14:03.124329 | orchestrator | Wednesday 11 June 2025 15:10:04 +0000 (0:00:00.289) 0:04:57.617 ********
2025-06-11 15:14:03.124336 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.124342 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.124349 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.124355 | orchestrator |
2025-06-11 15:14:03.124362 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-11 15:14:03.124368 | orchestrator | Wednesday 11 June 2025 15:10:04 +0000 (0:00:00.468) 0:04:58.085 ********
2025-06-11 15:14:03.124375 | orchestrator | changed: [testbed-node-3]
2025-06-11 15:14:03.124381 | orchestrator | changed: [testbed-node-4]
2025-06-11 15:14:03.124388 | orchestrator | changed: [testbed-node-5]
2025-06-11 15:14:03.124394 | orchestrator |
2025-06-11 15:14:03.124401 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-11 15:14:03.124408 | orchestrator | Wednesday 11 June 2025 15:10:05 +0000 (0:00:01.165) 0:04:59.251 ********
2025-06-11 15:14:03.124418 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-11 15:14:03.124426 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-11 15:14:03.124433 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-11 15:14:03.124439 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-11 15:14:03.124446 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-11 15:14:03.124457 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-11 15:14:03.124463 | orchestrator |
2025-06-11 15:14:03.124470 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-11 15:14:03.124477 | orchestrator | Wednesday 11 June 2025 15:10:09 +0000 (0:00:03.216) 0:05:02.468 ********
2025-06-11 15:14:03.124483 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-11 15:14:03.124490 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-11 15:14:03.124496 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-11 15:14:03.124503 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-11 15:14:03.124509 | orchestrator | changed: [testbed-node-3]
2025-06-11 15:14:03.124516 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-11 15:14:03.124522 | orchestrator | changed: [testbed-node-4]
2025-06-11 15:14:03.124529 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-11 15:14:03.124535 | orchestrator | changed: [testbed-node-5]
2025-06-11 15:14:03.124542 | orchestrator |
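The two secrets pushed here register the Ceph client.nova and client.cinder keys with libvirt under fixed UUIDs, the same UUIDs nova.conf later references as rbd_secret_uuid. The XML is the standard libvirt ceph-usage secret; sketched below as the copy that kolla performs, with the destination path being an assumption:

- name: Pushing nova secret xml for libvirt        # sketch; destination path assumed
  ansible.builtin.copy:
    dest: /etc/kolla/nova-libvirt/secrets/5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd.xml
    mode: "0600"
    content: |
      <secret ephemeral='no' private='no'>
        <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
        <usage type='ceph'>
          <name>client.nova secret</name>
        </usage>
      </secret>

"Pushing secrets key for libvirt" then drops the matching base64 key material alongside the XML; at container start these are turned into defined libvirt secrets (the item=None lines are the loop with its secret values masked by no_log).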
2025-06-11 15:14:03.124548 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-11 15:14:03.124555 | orchestrator | Wednesday 11 June 2025 15:10:12 +0000 (0:00:03.268) 0:05:05.736 ********
2025-06-11 15:14:03.124561 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.124568 | orchestrator |
2025-06-11 15:14:03.124574 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-11 15:14:03.124581 | orchestrator | Wednesday 11 June 2025 15:10:12 +0000 (0:00:00.132) 0:05:05.869 ********
2025-06-11 15:14:03.124587 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.124594 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.124600 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.124607 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.124613 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.124620 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.124626 | orchestrator |
2025-06-11 15:14:03.124633 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-11 15:14:03.124640 | orchestrator | Wednesday 11 June 2025 15:10:13 +0000 (0:00:00.746) 0:05:06.615 ********
2025-06-11 15:14:03.124646 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-11 15:14:03.124653 | orchestrator |
2025-06-11 15:14:03.124659 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-11 15:14:03.124666 | orchestrator | Wednesday 11 June 2025 15:10:13 +0000 (0:00:00.688) 0:05:07.304 ********
2025-06-11 15:14:03.124672 | orchestrator | skipping: [testbed-node-3]
2025-06-11 15:14:03.124678 | orchestrator | skipping: [testbed-node-4]
2025-06-11 15:14:03.124685 | orchestrator | skipping: [testbed-node-5]
2025-06-11 15:14:03.124691 | orchestrator | skipping: [testbed-node-0]
2025-06-11 15:14:03.124698 | orchestrator | skipping: [testbed-node-1]
2025-06-11 15:14:03.124704 | orchestrator | skipping: [testbed-node-2]
2025-06-11 15:14:03.124711 | orchestrator |
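The policy and vendordata steps are simple stat-then-set_fact probes: a custom policy file or vendordata.json is only wired into nova.conf if it exists under the operator's config tree, and here none does, hence the skips. A sketch with assumed paths:

- name: Check for vendordata file                  # sketch; path assumed
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/nova/vendordata.json"
  delegate_to: localhost
  run_once: true
  register: vendordata_file

- name: Set vendordata file path
  ansible.builtin.set_fact:
    vendordata_file_path: "{{ vendordata_file.stat.path }}"
  when: vendordata_file.stat.exists | bool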
2025-06-11 15:14:03.124717 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-06-11 15:14:03.124724 | orchestrator | Wednesday 11 June 2025 15:10:14 +0000 (0:00:00.589) 0:05:07.893 ********
2025-06-11 15:14:03.124734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.124752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.124760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-11 15:14:03.124767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.124775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.124782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-11 15:14:03.124792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.124807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.124814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-11 15:14:03.124821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-11 15:14:03.124933 | orchestrator |
'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.124860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.124924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.124933 | orchestrator | 2025-06-11 15:14:03.124940 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-11 15:14:03.124946 | orchestrator | Wednesday 11 June 2025 15:10:18 +0000 (0:00:03.737) 0:05:11.631 ******** 2025-06-11 15:14:03.124953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 
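
The item dicts echoed above by the "Copying over config.json files for services" task each describe one Kolla container: name, image, volumes, and healthcheck. The empty strings in the 'volumes' lists are optional mounts left unset in this deployment; presumably they are dropped before the container is created. A minimal Python sketch of that filtering, reusing the nova-libvirt values from this log:

    # Sketch only: filter placeholder entries out of a container item's
    # volume list, as echoed by "Copying over config.json files for services".
    nova_libvirt = {
        "container_name": "nova_libvirt",
        "image": "registry.osism.tech/kolla/nova-libvirt:2024.2",
        "volumes": [
            "/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro",
            "/lib/modules:/lib/modules:ro",
            "",  # optional mount left unset in this deployment
            "kolla_logs:/var/log/kolla/",
            "libvirtd:/var/lib/libvirt",
        ],
    }
    mounts = [v for v in nova_libvirt["volumes"] if v]  # drop empty placeholders
    print(f"{nova_libvirt['container_name']}: {len(mounts)} mounts")
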
15:14:03.124960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.124967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 15:14:03.124983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.124995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 15:14:03.125002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.125009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.125088 | orchestrator | 2025-06-11 15:14:03.125094 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-11 15:14:03.125101 | orchestrator | Wednesday 11 June 2025 15:10:24 +0000 (0:00:06.033) 0:05:17.664 ******** 2025-06-11 15:14:03.125108 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.125115 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.125121 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.125147 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125155 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125161 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125168 | orchestrator | 2025-06-11 15:14:03.125178 | 
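
The libvirt configuration task that follows renders Jinja2 templates (qemu.conf.j2, libvirtd.conf.j2) to their destination names on the compute nodes. A sketch of that src/dest mapping using the jinja2 library; the template bodies below are hypothetical, the real ones ship with the nova-cell role:

    # Sketch: each item pairs a Jinja2 template with a rendered destination,
    # e.g. {'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}. Template bodies here
    # are made up for illustration.
    from jinja2 import Template

    sources = {
        "qemu.conf.j2": "max_files = {{ max_files }}\n",
        "libvirtd.conf.j2": "listen_tls = {{ 1 if libvirt_tls else 0 }}\n",
    }
    for item in [{"src": "qemu.conf.j2", "dest": "qemu.conf"},
                 {"src": "libvirtd.conf.j2", "dest": "libvirtd.conf"}]:
        rendered = Template(sources[item["src"]]).render(
            max_files=32768, libvirt_tls=False)
        print(item["dest"], "<-", rendered.strip())
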
orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-11 15:14:03.125185 | orchestrator | Wednesday 11 June 2025 15:10:25 +0000 (0:00:01.599) 0:05:19.263 ******** 2025-06-11 15:14:03.125191 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-11 15:14:03.125198 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-11 15:14:03.125204 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-11 15:14:03.125211 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-11 15:14:03.125218 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-11 15:14:03.125224 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125234 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-11 15:14:03.125241 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-11 15:14:03.125248 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-11 15:14:03.125255 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125261 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-11 15:14:03.125268 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125275 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-11 15:14:03.125281 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-11 15:14:03.125288 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-11 15:14:03.125295 | orchestrator | 2025-06-11 15:14:03.125301 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-11 15:14:03.125308 | orchestrator | Wednesday 11 June 2025 15:10:29 +0000 (0:00:03.550) 0:05:22.814 ******** 2025-06-11 15:14:03.125314 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.125321 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.125327 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.125334 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125340 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125347 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125353 | orchestrator | 2025-06-11 15:14:03.125360 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-11 15:14:03.125366 | orchestrator | Wednesday 11 June 2025 15:10:30 +0000 (0:00:00.771) 0:05:23.585 ******** 2025-06-11 15:14:03.125373 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-11 15:14:03.125380 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-11 15:14:03.125391 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-11 15:14:03.125398 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-compute'}) 2025-06-11 15:14:03.125404 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-11 15:14:03.125411 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-11 15:14:03.125418 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125424 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125431 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125437 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125444 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125450 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125457 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125464 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-11 15:14:03.125470 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125477 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125484 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125490 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125497 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125507 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125514 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-11 15:14:03.125520 | orchestrator | 2025-06-11 15:14:03.125527 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-11 15:14:03.125534 | orchestrator | Wednesday 11 June 2025 15:10:35 +0000 (0:00:04.894) 0:05:28.479 ******** 2025-06-11 15:14:03.125540 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-11 15:14:03.125547 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-11 15:14:03.125553 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-11 15:14:03.125560 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-11 15:14:03.125570 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-11 15:14:03.125577 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-11 15:14:03.125584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'})  2025-06-11 15:14:03.125590 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-11 15:14:03.125597 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-11 15:14:03.125603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-11 15:14:03.125615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-11 15:14:03.125622 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-11 15:14:03.125629 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-11 15:14:03.125635 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125642 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-11 15:14:03.125649 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125655 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-11 15:14:03.125662 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-11 15:14:03.125668 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-11 15:14:03.125675 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125682 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-11 15:14:03.125688 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-11 15:14:03.125695 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-11 15:14:03.125701 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-11 15:14:03.125708 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-11 15:14:03.125714 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-11 15:14:03.125721 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-11 15:14:03.125728 | orchestrator | 2025-06-11 15:14:03.125734 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-11 15:14:03.125741 | orchestrator | Wednesday 11 June 2025 15:10:41 +0000 (0:00:06.784) 0:05:35.264 ******** 2025-06-11 15:14:03.125748 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.125754 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.125761 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.125767 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125774 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125780 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125787 | orchestrator | 2025-06-11 15:14:03.125793 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-11 15:14:03.125800 | orchestrator | Wednesday 11 June 2025 15:10:42 +0000 (0:00:00.544) 0:05:35.808 ******** 2025-06-11 15:14:03.125806 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.125813 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.125819 | orchestrator | skipping: 
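
The nova-ssh files task above installs sshd_config, an ssh keypair, and ssh_config on the compute nodes, and the container's healthcheck ('healthcheck_listen sshd 8022') passes once sshd is up on port 8022. A rough stand-in for that check, assuming it amounts to verifying the port accepts connections:

    # Sketch: rough equivalent of the nova_ssh healthcheck, assuming it boils
    # down to "is something accepting connections on the SSH migration port".
    import socket

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(port_open("127.0.0.1", 8022))
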
[testbed-node-5] 2025-06-11 15:14:03.125826 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125832 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125839 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125845 | orchestrator | 2025-06-11 15:14:03.125852 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-11 15:14:03.125859 | orchestrator | Wednesday 11 June 2025 15:10:43 +0000 (0:00:00.761) 0:05:36.569 ******** 2025-06-11 15:14:03.125865 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.125872 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.125878 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.125885 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.125891 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.125898 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.125904 | orchestrator | 2025-06-11 15:14:03.125911 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-11 15:14:03.125917 | orchestrator | Wednesday 11 June 2025 15:10:44 +0000 (0:00:01.733) 0:05:38.303 ******** 2025-06-11 15:14:03.125939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 15:14:03.125947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.125954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})  2025-06-11 15:14:03.125961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 15:14:03.125968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.125979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.125990 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.125997 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.126007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-11 15:14:03.126037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-11 15:14:03.126044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.126056 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.126067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-11 15:14:03.126078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.126095 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.126112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-11 15:14:03.126171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.126180 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.126188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-11 15:14:03.126195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-11 15:14:03.126201 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.126208 | orchestrator | 2025-06-11 15:14:03.126215 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-11 15:14:03.126221 | orchestrator | Wednesday 11 June 2025 15:10:46 +0000 (0:00:01.758) 0:05:40.061 ******** 2025-06-11 15:14:03.126228 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-11 15:14:03.126235 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126241 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.126247 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-11 15:14:03.126253 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126259 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.126265 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-11 15:14:03.126271 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126277 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.126283 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-11 15:14:03.126295 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126301 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.126307 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-11 15:14:03.126313 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126319 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.126325 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-11 15:14:03.126331 | 
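
The vendordata tasks in this run all skip because no vendordata file was staged (the earlier "Check for vendordata file" / "Set vendordata file path" pair found nothing to set). Had one been present, it would be a JSON document served to instances through the metadata service; a hypothetical minimal example:

    # Hypothetical vendordata payload; nothing like this exists in this run,
    # which is why the copy task skips on every node.
    import json

    vendordata = {"msg": "example vendor data", "support": "https://osism.tech"}
    print(json.dumps(vendordata, indent=2))
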
orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-11 15:14:03.126337 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.126343 | orchestrator | 2025-06-11 15:14:03.126349 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-11 15:14:03.126355 | orchestrator | Wednesday 11 June 2025 15:10:47 +0000 (0:00:00.611) 0:05:40.672 ******** 2025-06-11 15:14:03.126365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-11 15:14:03.126492 | orchestrator | 2025-06-11 15:14:03.126498 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-11 15:14:03.126505 | orchestrator | Wednesday 11 June 2025 15:10:50 +0000 (0:00:02.957) 0:05:43.630 ******** 2025-06-11 15:14:03.126511 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.126517 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.126523 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.126529 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.126535 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.126541 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.126547 | orchestrator | 2025-06-11 15:14:03.126556 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126563 | orchestrator | Wednesday 11 June 2025 15:10:50 +0000 (0:00:00.559) 0:05:44.190 ******** 2025-06-11 15:14:03.126569 | orchestrator | 2025-06-11 15:14:03.126575 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126581 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.299) 0:05:44.489 ******** 2025-06-11 15:14:03.126587 | orchestrator | 2025-06-11 15:14:03.126593 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126600 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.128) 0:05:44.617 ******** 2025-06-11 15:14:03.126606 | orchestrator | 2025-06-11 15:14:03.126612 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126618 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.129) 0:05:44.747 ******** 2025-06-11 15:14:03.126624 | orchestrator | 2025-06-11 15:14:03.126630 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126637 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.130) 0:05:44.877 ******** 2025-06-11 15:14:03.126643 | orchestrator | 2025-06-11 15:14:03.126649 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-11 15:14:03.126655 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.127) 0:05:45.005 ******** 2025-06-11 15:14:03.126661 | orchestrator | 2025-06-11 15:14:03.126667 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-11 15:14:03.126673 | orchestrator | Wednesday 11 June 2025 15:10:51 +0000 (0:00:00.127) 0:05:45.132 ******** 2025-06-11 15:14:03.126679 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.126685 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.126691 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.126697 | orchestrator | 2025-06-11 
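
Each task header in this log carries two durations from the timing callback: the parenthesized value appears to be the runtime of the task that just finished, and the final value the cumulative playbook time. A small parser for that format, using a header copied from the handler section below:

    # Sketch: parse one timing header like
    #   "Wednesday 11 June 2025 15:11:21 +0000 (0:00:18.118) 0:06:15.073"
    # where the parenthesized value appears to be the runtime of the task that
    # just finished and the last value the cumulative playbook time.
    import re

    line = "Wednesday 11 June 2025 15:11:21 +0000 (0:00:18.118) 0:06:15.073"
    m = re.search(r"\((\d+):(\d+):([\d.]+)\)\s+(\d+):(\d+):([\d.]+)", line)
    to_s = lambda h, mnt, s: int(h) * 3600 + int(mnt) * 60 + float(s)
    prev, total = to_s(*m.groups()[:3]), to_s(*m.groups()[3:])
    print(f"previous task: {prev:.3f}s, elapsed: {total:.3f}s")
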
15:14:03.126704 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-11 15:14:03.126710 | orchestrator | Wednesday 11 June 2025 15:11:03 +0000 (0:00:11.822) 0:05:56.955 ******** 2025-06-11 15:14:03.126716 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.126722 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.126728 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.126734 | orchestrator | 2025-06-11 15:14:03.126740 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-11 15:14:03.126746 | orchestrator | Wednesday 11 June 2025 15:11:21 +0000 (0:00:18.118) 0:06:15.073 ******** 2025-06-11 15:14:03.126753 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.126759 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.126765 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.126771 | orchestrator | 2025-06-11 15:14:03.126777 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-11 15:14:03.126783 | orchestrator | Wednesday 11 June 2025 15:11:42 +0000 (0:00:20.879) 0:06:35.953 ******** 2025-06-11 15:14:03.126789 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.126795 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.126802 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.126808 | orchestrator | 2025-06-11 15:14:03.126814 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-11 15:14:03.126823 | orchestrator | Wednesday 11 June 2025 15:12:26 +0000 (0:00:44.406) 0:07:20.359 ******** 2025-06-11 15:14:03.126829 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.126835 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.126841 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.126847 | orchestrator | 2025-06-11 15:14:03.126853 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-11 15:14:03.126860 | orchestrator | Wednesday 11 June 2025 15:12:28 +0000 (0:00:01.085) 0:07:21.444 ******** 2025-06-11 15:14:03.126866 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.126872 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.126883 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.126890 | orchestrator | 2025-06-11 15:14:03.126896 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-11 15:14:03.126902 | orchestrator | Wednesday 11 June 2025 15:12:28 +0000 (0:00:00.777) 0:07:22.222 ******** 2025-06-11 15:14:03.126908 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:14:03.126914 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:14:03.126920 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:14:03.126926 | orchestrator | 2025-06-11 15:14:03.126935 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-11 15:14:03.126942 | orchestrator | Wednesday 11 June 2025 15:12:56 +0000 (0:00:27.752) 0:07:49.974 ******** 2025-06-11 15:14:03.126948 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.126954 | orchestrator | 2025-06-11 15:14:03.126960 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-11 15:14:03.126967 | orchestrator | Wednesday 11 June 2025 15:12:56 +0000 (0:00:00.131) 
0:07:50.106 ******** 2025-06-11 15:14:03.126973 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.126979 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.126985 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.126991 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.126997 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127003 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-11 15:14:03.127010 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-11 15:14:03.127016 | orchestrator | 2025-06-11 15:14:03.127022 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-11 15:14:03.127028 | orchestrator | Wednesday 11 June 2025 15:13:18 +0000 (0:00:22.209) 0:08:12.315 ******** 2025-06-11 15:14:03.127034 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.127041 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.127047 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.127053 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127059 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127065 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127071 | orchestrator | 2025-06-11 15:14:03.127077 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-11 15:14:03.127083 | orchestrator | Wednesday 11 June 2025 15:13:26 +0000 (0:00:07.872) 0:08:20.188 ******** 2025-06-11 15:14:03.127090 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127096 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.127102 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127108 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.127114 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127120 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-06-11 15:14:03.127142 | orchestrator | 2025-06-11 15:14:03.127151 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-11 15:14:03.127157 | orchestrator | Wednesday 11 June 2025 15:13:30 +0000 (0:00:03.628) 0:08:23.816 ******** 2025-06-11 15:14:03.127164 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-11 15:14:03.127170 | orchestrator | 2025-06-11 15:14:03.127176 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-11 15:14:03.127182 | orchestrator | Wednesday 11 June 2025 15:13:42 +0000 (0:00:11.843) 0:08:35.659 ******** 2025-06-11 15:14:03.127188 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-11 15:14:03.127194 | orchestrator | 2025-06-11 15:14:03.127201 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-11 15:14:03.127207 | orchestrator | Wednesday 11 June 2025 15:13:43 +0000 (0:00:01.309) 0:08:36.969 ******** 2025-06-11 15:14:03.127213 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.127219 | orchestrator | 2025-06-11 15:14:03.127230 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-11 15:14:03.127237 | orchestrator | Wednesday 11 June 2025 15:13:44 +0000 (0:00:01.316) 0:08:38.285 ******** 2025-06-11 
15:14:03.127243 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-11 15:14:03.127249 | orchestrator | 2025-06-11 15:14:03.127255 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-11 15:14:03.127261 | orchestrator | Wednesday 11 June 2025 15:13:55 +0000 (0:00:10.238) 0:08:48.524 ******** 2025-06-11 15:14:03.127267 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:14:03.127274 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:14:03.127280 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:14:03.127286 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:14:03.127292 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:14:03.127298 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:14:03.127304 | orchestrator | 2025-06-11 15:14:03.127310 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-11 15:14:03.127317 | orchestrator | 2025-06-11 15:14:03.127323 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-11 15:14:03.127329 | orchestrator | Wednesday 11 June 2025 15:13:56 +0000 (0:00:01.695) 0:08:50.220 ******** 2025-06-11 15:14:03.127335 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:14:03.127341 | orchestrator | changed: [testbed-node-1] 2025-06-11 15:14:03.127347 | orchestrator | changed: [testbed-node-2] 2025-06-11 15:14:03.127354 | orchestrator | 2025-06-11 15:14:03.127360 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-11 15:14:03.127366 | orchestrator | 2025-06-11 15:14:03.127372 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-11 15:14:03.127384 | orchestrator | Wednesday 11 June 2025 15:13:57 +0000 (0:00:01.090) 0:08:51.311 ******** 2025-06-11 15:14:03.127390 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127396 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127402 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127408 | orchestrator | 2025-06-11 15:14:03.127414 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-11 15:14:03.127421 | orchestrator | 2025-06-11 15:14:03.127427 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-11 15:14:03.127433 | orchestrator | Wednesday 11 June 2025 15:13:58 +0000 (0:00:00.496) 0:08:51.807 ******** 2025-06-11 15:14:03.127439 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-11 15:14:03.127446 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-11 15:14:03.127452 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127458 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-11 15:14:03.127464 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-11 15:14:03.127474 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127480 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:14:03.127487 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-11 15:14:03.127493 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-11 15:14:03.127499 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127505 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-11 15:14:03.127511 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-11 15:14:03.127517 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127524 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:14:03.127530 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-11 15:14:03.127536 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-11 15:14:03.127542 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127548 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-11 15:14:03.127559 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-11 15:14:03.127565 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127571 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:14:03.127578 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-11 15:14:03.127584 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-11 15:14:03.127590 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127596 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-11 15:14:03.127602 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-11 15:14:03.127608 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127614 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127621 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-11 15:14:03.127627 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-11 15:14:03.127633 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127639 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-11 15:14:03.127645 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-11 15:14:03.127652 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127658 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127664 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-11 15:14:03.127670 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-11 15:14:03.127676 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-11 15:14:03.127683 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-11 15:14:03.127689 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-11 15:14:03.127695 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-11 15:14:03.127701 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127707 | orchestrator | 2025-06-11 15:14:03.127713 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-11 15:14:03.127720 | orchestrator | 2025-06-11 15:14:03.127726 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-11 15:14:03.127732 | orchestrator | Wednesday 11 June 2025 15:13:59 +0000 (0:00:01.328) 0:08:53.136 ******** 2025-06-11 15:14:03.127738 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-11 
15:14:03.127744 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-11 15:14:03.127750 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127756 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-11 15:14:03.127763 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-11 15:14:03.127769 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127775 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-11 15:14:03.127781 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-11 15:14:03.127787 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127793 | orchestrator | 2025-06-11 15:14:03.127799 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-11 15:14:03.127805 | orchestrator | 2025-06-11 15:14:03.127811 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-11 15:14:03.127818 | orchestrator | Wednesday 11 June 2025 15:14:00 +0000 (0:00:00.697) 0:08:53.834 ******** 2025-06-11 15:14:03.127824 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127830 | orchestrator | 2025-06-11 15:14:03.127839 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-11 15:14:03.127846 | orchestrator | 2025-06-11 15:14:03.127852 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-11 15:14:03.127862 | orchestrator | Wednesday 11 June 2025 15:14:01 +0000 (0:00:00.687) 0:08:54.521 ******** 2025-06-11 15:14:03.127869 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:14:03.127875 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:14:03.127881 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:14:03.127887 | orchestrator | 2025-06-11 15:14:03.127893 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:14:03.127900 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:14:03.127906 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-11 15:14:03.127916 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-11 15:14:03.127922 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-11 15:14:03.127992 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-11 15:14:03.127999 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-11 15:14:03.128005 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-11 15:14:03.128011 | orchestrator | 2025-06-11 15:14:03.128017 | orchestrator | 2025-06-11 15:14:03.128023 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:14:03.128030 | orchestrator | Wednesday 11 June 2025 15:14:01 +0000 (0:00:00.475) 0:08:54.997 ******** 2025-06-11 15:14:03.128036 | orchestrator | =============================================================================== 2025-06-11 15:14:03.128042 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 49.54s 
2025-06-11 15:14:03.128048 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.41s
2025-06-11 15:14:03.128054 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.75s
2025-06-11 15:14:03.128061 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.44s
2025-06-11 15:14:03.128067 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.21s
2025-06-11 15:14:03.128073 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.50s
2025-06-11 15:14:03.128079 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.88s
2025-06-11 15:14:03.128085 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.86s
2025-06-11 15:14:03.128091 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.12s
2025-06-11 15:14:03.128097 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.86s
2025-06-11 15:14:03.128103 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.07s
2025-06-11 15:14:03.128109 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.29s
2025-06-11 15:14:03.128116 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.84s
2025-06-11 15:14:03.128122 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.82s
2025-06-11 15:14:03.128143 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.40s
2025-06-11 15:14:03.128150 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.28s
2025-06-11 15:14:03.128156 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.24s
2025-06-11 15:14:03.128162 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.27s
2025-06-11 15:14:03.128173 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.87s
2025-06-11 15:14:03.128179 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.64s
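The nova-cell play recapped above finishes by waiting for every nova-compute service to register itself (the FAILED - RETRYING lines earlier) and then includes discover_computes.yml to map the new hosts into their cell; the RPC version pin/cap reload tasks are all skipped because this is a fresh deployment rather than a rolling upgrade. A rough manual equivalent of that wait-and-discover step, assuming admin credentials and the kolla container name nova_conductor, would be:

    # Poll until a compute service reports in (node name is illustrative)
    for _ in $(seq 1 20); do
        openstack compute service list --service nova-compute -f value -c Host \
            | grep -q testbed-node-3 && break
        sleep 10
    done
    # Map newly registered computes into the cell, as discover_computes.yml does
    docker exec nova_conductor nova-manage cell_v2 discover_hosts --verbose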
2025-06-11 15:14:03.128186 | 2025-06-11 15:14:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-11 15:15:00.920123 | orchestrator | 2025-06-11 15:15:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-11 15:15:03.960508 | orchestrator |
2025-06-11 15:15:04.269274 | orchestrator |
2025-06-11 15:15:04.272679 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Jun 11 15:15:04 UTC 2025
2025-06-11 15:15:04.272730 | orchestrator |
2025-06-11 15:15:04.656182 | orchestrator | ok: Runtime: 0:35:07.508724
2025-06-11 15:15:04.899858 |
2025-06-11 15:15:04.900036 | TASK [Bootstrap services]
2025-06-11 15:15:05.602594 | orchestrator |
2025-06-11 15:15:05.602929 | orchestrator | # BOOTSTRAP
2025-06-11 15:15:05.602959 | orchestrator |
2025-06-11 15:15:05.602973 | orchestrator | + set -e
2025-06-11 15:15:05.602987 | orchestrator | + echo
2025-06-11 15:15:05.603001 | orchestrator | + echo '# BOOTSTRAP'
2025-06-11 15:15:05.603019 | orchestrator | + echo
2025-06-11 15:15:05.603063 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-06-11 15:15:05.612298 | orchestrator | + set -e
2025-06-11 15:15:05.612367 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-06-11 15:15:09.515008 | orchestrator | 2025-06-11 15:15:09 | INFO  | It takes a moment until task ed119da7-5955-4b85-9f99-af095b4eac5d (flavor-manager) has been started and output is visible here.
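The flavor-manager task just started creates the SCS standard flavors listed below. The names encode the shape: SCS-2V-8 is 2 vCPUs and 8 GiB RAM with no root disk, and a trailing size such as SCS-2V-8-20 adds a 20 GB root disk. Assuming admin credentials, each "Flavor ... created" line corresponds roughly to a plain flavor-create call:

    # Illustrative equivalents for two of the flavors created below
    openstack flavor create --vcpus 2 --ram 8192 --disk 0 --public SCS-2V-8
    openstack flavor create --vcpus 2 --ram 8192 --disk 20 --public SCS-2V-8-20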
2025-06-11 15:15:18.028904 | orchestrator | 2025-06-11 15:15:13 | INFO  | Flavor SCS-1V-4 created 2025-06-11 15:15:18.029018 | orchestrator | 2025-06-11 15:15:13 | INFO  | Flavor SCS-2V-8 created 2025-06-11 15:15:18.029029 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-4V-16 created 2025-06-11 15:15:18.029037 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-8V-32 created 2025-06-11 15:15:18.029044 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-1V-2 created 2025-06-11 15:15:18.029051 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-2V-4 created 2025-06-11 15:15:18.029057 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-4V-8 created 2025-06-11 15:15:18.029064 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-8V-16 created 2025-06-11 15:15:18.029082 | orchestrator | 2025-06-11 15:15:14 | INFO  | Flavor SCS-16V-32 created 2025-06-11 15:15:18.029089 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-1V-8 created 2025-06-11 15:15:18.029095 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-2V-16 created 2025-06-11 15:15:18.029101 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-4V-32 created 2025-06-11 15:15:18.029107 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-1L-1 created 2025-06-11 15:15:18.029114 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-2V-4-20s created 2025-06-11 15:15:18.029120 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-4V-16-100s created 2025-06-11 15:15:18.029126 | orchestrator | 2025-06-11 15:15:15 | INFO  | Flavor SCS-1V-4-10 created 2025-06-11 15:15:18.029132 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-2V-8-20 created 2025-06-11 15:15:18.029139 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-4V-16-50 created 2025-06-11 15:15:18.029145 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-8V-32-100 created 2025-06-11 15:15:18.029152 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-1V-2-5 created 2025-06-11 15:15:18.029158 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-2V-4-10 created 2025-06-11 15:15:18.029164 | orchestrator | 2025-06-11 15:15:16 | INFO  | Flavor SCS-4V-8-20 created 2025-06-11 15:15:18.029170 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-8V-16-50 created 2025-06-11 15:15:18.029177 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-16V-32-100 created 2025-06-11 15:15:18.029183 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-1V-8-20 created 2025-06-11 15:15:18.029189 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-2V-16-50 created 2025-06-11 15:15:18.029196 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-4V-32-100 created 2025-06-11 15:15:18.029202 | orchestrator | 2025-06-11 15:15:17 | INFO  | Flavor SCS-1L-1-5 created 2025-06-11 15:15:19.991302 | orchestrator | 2025-06-11 15:15:19 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-11 15:15:19.996273 | orchestrator | Registering Redlock._acquired_script 2025-06-11 15:15:19.996308 | orchestrator | Registering Redlock._extend_script 2025-06-11 15:15:19.997022 | orchestrator | Registering Redlock._release_script 2025-06-11 15:15:20.057531 | orchestrator | 2025-06-11 15:15:20 | INFO  | Task 03d266c7-25ca-4910-8852-85a74bf8bf76 (bootstrap-basic) was prepared for execution. 2025-06-11 15:15:20.057627 | orchestrator | 2025-06-11 15:15:20 | INFO  | It takes a moment until task 03d266c7-25ca-4910-8852-85a74bf8bf76 (bootstrap-basic) has been started and output is visible here. 
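The bootstrap-basic play that follows seeds the basic OpenStack resources: a LUKS-encrypted and a local volume type, a public external network marked as the default, a subnet on it, a default IPv4 subnet pool, and a manager role. A sketch of those steps as plain CLI calls, with illustrative values wherever the play's variables are not visible in this log:

    # Volume types (encryption parameters are assumptions, not taken from this log)
    openstack volume type create --encryption-provider luks \
        --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
        --encryption-control-location front-end LUKS
    openstack volume type create local
    # External provider network, flagged as the default external network
    openstack network create --external public
    openstack network set --default public
    openstack subnet create --network public --subnet-range 192.168.112.0/20 public-subnet
    openstack subnet pool create --default --pool-prefix 10.0.0.0/16 default-ipv4
    openstack role create manager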
2025-06-11 15:16:20.737739 | orchestrator | 2025-06-11 15:16:20.737863 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-11 15:16:20.737881 | orchestrator | 2025-06-11 15:16:20.737894 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-11 15:16:20.737908 | orchestrator | Wednesday 11 June 2025 15:15:24 +0000 (0:00:00.074) 0:00:00.074 ******** 2025-06-11 15:16:20.737921 | orchestrator | ok: [localhost] 2025-06-11 15:16:20.737933 | orchestrator | 2025-06-11 15:16:20.737944 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-11 15:16:20.737955 | orchestrator | Wednesday 11 June 2025 15:15:26 +0000 (0:00:01.838) 0:00:01.912 ******** 2025-06-11 15:16:20.737966 | orchestrator | ok: [localhost] 2025-06-11 15:16:20.737977 | orchestrator | 2025-06-11 15:16:20.737988 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-11 15:16:20.737999 | orchestrator | Wednesday 11 June 2025 15:15:33 +0000 (0:00:07.128) 0:00:09.040 ******** 2025-06-11 15:16:20.738010 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738073 | orchestrator | 2025-06-11 15:16:20.738086 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-11 15:16:20.738097 | orchestrator | Wednesday 11 June 2025 15:15:40 +0000 (0:00:07.110) 0:00:16.150 ******** 2025-06-11 15:16:20.738116 | orchestrator | ok: [localhost] 2025-06-11 15:16:20.738127 | orchestrator | 2025-06-11 15:16:20.738142 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-11 15:16:20.738154 | orchestrator | Wednesday 11 June 2025 15:15:46 +0000 (0:00:06.064) 0:00:22.215 ******** 2025-06-11 15:16:20.738165 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738175 | orchestrator | 2025-06-11 15:16:20.738186 | orchestrator | TASK [Create public network] *************************************************** 2025-06-11 15:16:20.738197 | orchestrator | Wednesday 11 June 2025 15:15:53 +0000 (0:00:07.354) 0:00:29.570 ******** 2025-06-11 15:16:20.738208 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738235 | orchestrator | 2025-06-11 15:16:20.738279 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-11 15:16:20.738292 | orchestrator | Wednesday 11 June 2025 15:16:00 +0000 (0:00:06.907) 0:00:36.478 ******** 2025-06-11 15:16:20.738305 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738317 | orchestrator | 2025-06-11 15:16:20.738330 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-11 15:16:20.738342 | orchestrator | Wednesday 11 June 2025 15:16:08 +0000 (0:00:07.510) 0:00:43.988 ******** 2025-06-11 15:16:20.738386 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738399 | orchestrator | 2025-06-11 15:16:20.738411 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-11 15:16:20.738424 | orchestrator | Wednesday 11 June 2025 15:16:12 +0000 (0:00:04.565) 0:00:48.553 ******** 2025-06-11 15:16:20.738436 | orchestrator | changed: [localhost] 2025-06-11 15:16:20.738448 | orchestrator | 2025-06-11 15:16:20.738536 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-11 15:16:20.738551 | orchestrator | Wednesday 11 June 2025 
15:16:16 +0000 (0:00:04.290) 0:00:52.844 ******** 2025-06-11 15:16:20.738564 | orchestrator | ok: [localhost] 2025-06-11 15:16:20.738576 | orchestrator | 2025-06-11 15:16:20.738589 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:16:20.738602 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-11 15:16:20.738615 | orchestrator | 2025-06-11 15:16:20.738626 | orchestrator | 2025-06-11 15:16:20.738663 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:16:20.738675 | orchestrator | Wednesday 11 June 2025 15:16:20 +0000 (0:00:03.503) 0:00:56.347 ******** 2025-06-11 15:16:20.738685 | orchestrator | =============================================================================== 2025-06-11 15:16:20.738697 | orchestrator | Set public network to default ------------------------------------------- 7.51s 2025-06-11 15:16:20.738707 | orchestrator | Create volume type local ------------------------------------------------ 7.35s 2025-06-11 15:16:20.738718 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.13s 2025-06-11 15:16:20.738729 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.11s 2025-06-11 15:16:20.738740 | orchestrator | Create public network --------------------------------------------------- 6.91s 2025-06-11 15:16:20.738750 | orchestrator | Get volume type local --------------------------------------------------- 6.06s 2025-06-11 15:16:20.738761 | orchestrator | Create public subnet ---------------------------------------------------- 4.57s 2025-06-11 15:16:20.738772 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.29s 2025-06-11 15:16:20.738782 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2025-06-11 15:16:20.738837 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s 2025-06-11 15:16:22.841822 | orchestrator | 2025-06-11 15:16:22 | INFO  | It takes a moment until task e14fa2c7-2378-4a56-ae7a-958bc871575f (image-manager) has been started and output is visible here. 2025-06-11 15:17:04.076093 | orchestrator | 2025-06-11 15:16:26 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-11 15:17:04.076212 | orchestrator | 2025-06-11 15:16:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-11 15:17:04.076230 | orchestrator | 2025-06-11 15:16:26 | INFO  | Importing image Cirros 0.6.2 2025-06-11 15:17:04.076242 | orchestrator | 2025-06-11 15:16:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-11 15:17:04.076254 | orchestrator | 2025-06-11 15:16:28 | INFO  | Waiting for image to leave queued state... 2025-06-11 15:17:04.076266 | orchestrator | 2025-06-11 15:16:30 | INFO  | Waiting for import to complete... 
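The "Waiting for import to complete..." lines come from openstack-image-manager driving Glance's interoperable image import (web-download): the image record is created, leaves the queued state once Glance starts downloading from the URL, and becomes active when the download finishes; afterwards the manager stamps tags, properties and visibility onto it, as the following lines show. Polling and tagging by hand would look roughly like this (the image name is taken from the log, the rest is a sketch):

    # Wait for the import to finish
    while [ "$(openstack image show 'Cirros 0.6.2' -f value -c status)" != "active" ]; do
        sleep 10
    done
    # Stamp properties, tags and visibility the way the log shows
    openstack image set --property hw_disk_bus=scsi --property os_distro=cirros \
        --tag os:cirros --public 'Cirros 0.6.2'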
2025-06-11 15:17:04.076277 | orchestrator | 2025-06-11 15:16:40 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-11 15:17:04.076291 | orchestrator | 2025-06-11 15:16:40 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-11 15:17:04.076302 | orchestrator | 2025-06-11 15:16:40 | INFO  | Setting internal_version = 0.6.2 2025-06-11 15:17:04.076312 | orchestrator | 2025-06-11 15:16:40 | INFO  | Setting image_original_user = cirros 2025-06-11 15:17:04.076324 | orchestrator | 2025-06-11 15:16:40 | INFO  | Adding tag os:cirros 2025-06-11 15:17:04.076334 | orchestrator | 2025-06-11 15:16:41 | INFO  | Setting property architecture: x86_64 2025-06-11 15:17:04.076345 | orchestrator | 2025-06-11 15:16:41 | INFO  | Setting property hw_disk_bus: scsi 2025-06-11 15:17:04.076356 | orchestrator | 2025-06-11 15:16:41 | INFO  | Setting property hw_rng_model: virtio 2025-06-11 15:17:04.076366 | orchestrator | 2025-06-11 15:16:41 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-11 15:17:04.076377 | orchestrator | 2025-06-11 15:16:42 | INFO  | Setting property hw_watchdog_action: reset 2025-06-11 15:17:04.076388 | orchestrator | 2025-06-11 15:16:42 | INFO  | Setting property hypervisor_type: qemu 2025-06-11 15:17:04.076399 | orchestrator | 2025-06-11 15:16:42 | INFO  | Setting property os_distro: cirros 2025-06-11 15:17:04.076487 | orchestrator | 2025-06-11 15:16:42 | INFO  | Setting property replace_frequency: never 2025-06-11 15:17:04.076531 | orchestrator | 2025-06-11 15:16:43 | INFO  | Setting property uuid_validity: none 2025-06-11 15:17:04.076543 | orchestrator | 2025-06-11 15:16:43 | INFO  | Setting property provided_until: none 2025-06-11 15:17:04.076558 | orchestrator | 2025-06-11 15:16:43 | INFO  | Setting property image_description: Cirros 2025-06-11 15:17:04.076569 | orchestrator | 2025-06-11 15:16:43 | INFO  | Setting property image_name: Cirros 2025-06-11 15:17:04.076580 | orchestrator | 2025-06-11 15:16:44 | INFO  | Setting property internal_version: 0.6.2 2025-06-11 15:17:04.076591 | orchestrator | 2025-06-11 15:16:44 | INFO  | Setting property image_original_user: cirros 2025-06-11 15:17:04.076604 | orchestrator | 2025-06-11 15:16:44 | INFO  | Setting property os_version: 0.6.2 2025-06-11 15:17:04.076617 | orchestrator | 2025-06-11 15:16:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-11 15:17:04.076631 | orchestrator | 2025-06-11 15:16:44 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-11 15:17:04.076644 | orchestrator | 2025-06-11 15:16:45 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-11 15:17:04.076657 | orchestrator | 2025-06-11 15:16:45 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-11 15:17:04.076670 | orchestrator | 2025-06-11 15:16:45 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-11 15:17:04.076682 | orchestrator | 2025-06-11 15:16:45 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-11 15:17:04.076695 | orchestrator | 2025-06-11 15:16:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-11 15:17:04.076707 | orchestrator | 2025-06-11 15:16:45 | INFO  | Importing image Cirros 0.6.3 2025-06-11 15:17:04.076720 | orchestrator | 2025-06-11 15:16:45 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-11 15:17:04.076733 | orchestrator | 2025-06-11 
15:16:46 | INFO  | Waiting for image to leave queued state... 2025-06-11 15:17:04.076745 | orchestrator | 2025-06-11 15:16:48 | INFO  | Waiting for import to complete... 2025-06-11 15:17:04.076758 | orchestrator | 2025-06-11 15:16:59 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-11 15:17:04.076790 | orchestrator | 2025-06-11 15:16:59 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-11 15:17:04.076804 | orchestrator | 2025-06-11 15:16:59 | INFO  | Setting internal_version = 0.6.3 2025-06-11 15:17:04.076817 | orchestrator | 2025-06-11 15:16:59 | INFO  | Setting image_original_user = cirros 2025-06-11 15:17:04.076830 | orchestrator | 2025-06-11 15:16:59 | INFO  | Adding tag os:cirros 2025-06-11 15:17:04.076842 | orchestrator | 2025-06-11 15:16:59 | INFO  | Setting property architecture: x86_64 2025-06-11 15:17:04.076855 | orchestrator | 2025-06-11 15:16:59 | INFO  | Setting property hw_disk_bus: scsi 2025-06-11 15:17:04.076867 | orchestrator | 2025-06-11 15:17:00 | INFO  | Setting property hw_rng_model: virtio 2025-06-11 15:17:04.076879 | orchestrator | 2025-06-11 15:17:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-11 15:17:04.076892 | orchestrator | 2025-06-11 15:17:00 | INFO  | Setting property hw_watchdog_action: reset 2025-06-11 15:17:04.076905 | orchestrator | 2025-06-11 15:17:00 | INFO  | Setting property hypervisor_type: qemu 2025-06-11 15:17:04.076917 | orchestrator | 2025-06-11 15:17:00 | INFO  | Setting property os_distro: cirros 2025-06-11 15:17:04.076930 | orchestrator | 2025-06-11 15:17:01 | INFO  | Setting property replace_frequency: never 2025-06-11 15:17:04.076951 | orchestrator | 2025-06-11 15:17:01 | INFO  | Setting property uuid_validity: none 2025-06-11 15:17:04.076962 | orchestrator | 2025-06-11 15:17:01 | INFO  | Setting property provided_until: none 2025-06-11 15:17:04.076972 | orchestrator | 2025-06-11 15:17:01 | INFO  | Setting property image_description: Cirros 2025-06-11 15:17:04.076983 | orchestrator | 2025-06-11 15:17:01 | INFO  | Setting property image_name: Cirros 2025-06-11 15:17:04.076994 | orchestrator | 2025-06-11 15:17:02 | INFO  | Setting property internal_version: 0.6.3 2025-06-11 15:17:04.077004 | orchestrator | 2025-06-11 15:17:02 | INFO  | Setting property image_original_user: cirros 2025-06-11 15:17:04.077021 | orchestrator | 2025-06-11 15:17:02 | INFO  | Setting property os_version: 0.6.3 2025-06-11 15:17:04.077032 | orchestrator | 2025-06-11 15:17:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-11 15:17:04.077042 | orchestrator | 2025-06-11 15:17:03 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-11 15:17:04.077053 | orchestrator | 2025-06-11 15:17:03 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-11 15:17:04.077064 | orchestrator | 2025-06-11 15:17:03 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-11 15:17:04.077074 | orchestrator | 2025-06-11 15:17:03 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-11 15:17:04.322785 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-11 15:17:06.333922 | orchestrator | 2025-06-11 15:17:06 | INFO  | date: 2025-06-11 2025-06-11 15:17:06.334117 | orchestrator | 2025-06-11 15:17:06 | INFO  | image: octavia-amphora-haproxy-2024.2.20250611.qcow2 2025-06-11 15:17:06.334147 | orchestrator | 2025-06-11 15:17:06 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2 2025-06-11 15:17:06.334197 | orchestrator | 2025-06-11 15:17:06 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2.CHECKSUM 2025-06-11 15:17:06.384792 | orchestrator | 2025-06-11 15:17:06 | INFO  | checksum: 94eeab448b9f46ba34f5d8df029ee1c2395a284bd37cd8d801aaa75ab39d5a9f 2025-06-11 15:17:06.454291 | orchestrator | 2025-06-11 15:17:06 | INFO  | It takes a moment until task 3df48a62-b064-47dd-9118-c79a5795aa4e (image-manager) has been started and output is visible here. 2025-06-11 15:18:06.873384 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-11 15:18:06.873568 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-11 15:18:06.873597 | orchestrator | 2025-06-11 15:17:08 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-11' 2025-06-11 15:18:06.873621 | orchestrator | 2025-06-11 15:17:08 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2: 200 2025-06-11 15:18:06.873642 | orchestrator | 2025-06-11 15:17:08 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-11 2025-06-11 15:18:06.873671 | orchestrator | 2025-06-11 15:17:08 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2 2025-06-11 15:18:06.873726 | orchestrator | 2025-06-11 15:17:09 | INFO  | Waiting for image to leave queued state... 2025-06-11 15:18:06.873749 | orchestrator | 2025-06-11 15:17:11 | INFO  | Waiting for import to complete... 2025-06-11 15:18:06.873769 | orchestrator | 2025-06-11 15:17:21 | INFO  | Waiting for import to complete... 2025-06-11 15:18:06.873788 | orchestrator | 2025-06-11 15:17:31 | INFO  | Waiting for import to complete... 2025-06-11 15:18:06.873801 | orchestrator | 2025-06-11 15:17:41 | INFO  | Waiting for import to complete... 2025-06-11 15:18:06.873812 | orchestrator | 2025-06-11 15:17:51 | INFO  | Waiting for import to complete... 
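For the Octavia amphora image the bootstrap script first resolves the published SHA256 from the .CHECKSUM file that sits next to the qcow2 (both URLs are logged above). A hand-rolled version of that verification, assuming the checksum file simply contains the hex digest, might be:

    # Fetch the image and compare it against the published checksum (sketch)
    url=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2
    curl -fsSLO "$url"
    expected=$(curl -fsSL "$url.CHECKSUM" | grep -oE '[a-f0-9]{64}' | head -n1)
    actual=$(sha256sum "${url##*/}" | awk '{print $1}')
    [ "$expected" = "$actual" ] && echo "checksum OK"

Once imported, the image gets the amphora tag (visible just below), which is how Octavia is typically pointed at its boot image (the amp_image_tag setting).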
2025-06-11 15:18:06.873835 | orchestrator | 2025-06-11 15:18:01 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-11' successfully completed, reloading images 2025-06-11 15:18:06.873847 | orchestrator | 2025-06-11 15:18:02 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-11' 2025-06-11 15:18:06.873858 | orchestrator | 2025-06-11 15:18:02 | INFO  | Setting internal_version = 2025-06-11 2025-06-11 15:18:06.873869 | orchestrator | 2025-06-11 15:18:02 | INFO  | Setting image_original_user = ubuntu 2025-06-11 15:18:06.873879 | orchestrator | 2025-06-11 15:18:02 | INFO  | Adding tag amphora 2025-06-11 15:18:06.873890 | orchestrator | 2025-06-11 15:18:02 | INFO  | Adding tag os:ubuntu 2025-06-11 15:18:06.873902 | orchestrator | 2025-06-11 15:18:02 | INFO  | Setting property architecture: x86_64 2025-06-11 15:18:06.873915 | orchestrator | 2025-06-11 15:18:03 | INFO  | Setting property hw_disk_bus: scsi 2025-06-11 15:18:06.873927 | orchestrator | 2025-06-11 15:18:03 | INFO  | Setting property hw_rng_model: virtio 2025-06-11 15:18:06.873940 | orchestrator | 2025-06-11 15:18:03 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-11 15:18:06.873952 | orchestrator | 2025-06-11 15:18:03 | INFO  | Setting property hw_watchdog_action: reset 2025-06-11 15:18:06.873964 | orchestrator | 2025-06-11 15:18:04 | INFO  | Setting property hypervisor_type: qemu 2025-06-11 15:18:06.873977 | orchestrator | 2025-06-11 15:18:04 | INFO  | Setting property os_distro: ubuntu 2025-06-11 15:18:06.873990 | orchestrator | 2025-06-11 15:18:04 | INFO  | Setting property replace_frequency: quarterly 2025-06-11 15:18:06.874002 | orchestrator | 2025-06-11 15:18:04 | INFO  | Setting property uuid_validity: last-1 2025-06-11 15:18:06.874077 | orchestrator | 2025-06-11 15:18:04 | INFO  | Setting property provided_until: none 2025-06-11 15:18:06.874104 | orchestrator | 2025-06-11 15:18:05 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-11 15:18:06.874129 | orchestrator | 2025-06-11 15:18:05 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-11 15:18:06.874146 | orchestrator | 2025-06-11 15:18:05 | INFO  | Setting property internal_version: 2025-06-11 2025-06-11 15:18:06.874165 | orchestrator | 2025-06-11 15:18:05 | INFO  | Setting property image_original_user: ubuntu 2025-06-11 15:18:06.874183 | orchestrator | 2025-06-11 15:18:05 | INFO  | Setting property os_version: 2025-06-11 2025-06-11 15:18:06.874201 | orchestrator | 2025-06-11 15:18:06 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250611.qcow2 2025-06-11 15:18:06.874244 | orchestrator | 2025-06-11 15:18:06 | INFO  | Setting property image_build_date: 2025-06-11 2025-06-11 15:18:06.874265 | orchestrator | 2025-06-11 15:18:06 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-11' 2025-06-11 15:18:06.874299 | orchestrator | 2025-06-11 15:18:06 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-11' 2025-06-11 15:18:06.874319 | orchestrator | 2025-06-11 15:18:06 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-11 15:18:06.874338 | orchestrator | 2025-06-11 15:18:06 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-11 15:18:06.874358 | orchestrator | 2025-06-11 15:18:06 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-11 15:18:06.874373 | 
orchestrator | 2025-06-11 15:18:06 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-11 15:18:07.571724 | orchestrator | ok: Runtime: 0:03:01.902487 2025-06-11 15:18:07.639117 | 2025-06-11 15:18:07.639247 | TASK [Run checks] 2025-06-11 15:18:08.316459 | orchestrator | + set -e 2025-06-11 15:18:08.316731 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 15:18:08.316756 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 15:18:08.316778 | orchestrator | ++ INTERACTIVE=false 2025-06-11 15:18:08.316792 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 15:18:08.316805 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 15:18:08.316819 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-11 15:18:08.317763 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-11 15:18:08.324773 | orchestrator | 2025-06-11 15:18:08.324901 | orchestrator | # CHECK 2025-06-11 15:18:08.324920 | orchestrator | 2025-06-11 15:18:08.324934 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 15:18:08.324951 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 15:18:08.324963 | orchestrator | + echo 2025-06-11 15:18:08.324974 | orchestrator | + echo '# CHECK' 2025-06-11 15:18:08.324985 | orchestrator | + echo 2025-06-11 15:18:08.325001 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-11 15:18:08.325784 | orchestrator | ++ semver latest 5.0.0 2025-06-11 15:18:08.399104 | orchestrator | 2025-06-11 15:18:08.399264 | orchestrator | ## Containers @ testbed-manager 2025-06-11 15:18:08.399329 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-11 15:18:08.399341 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-11 15:18:08.399413 | orchestrator | + echo 2025-06-11 15:18:08.399438 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-11 15:18:08.399450 | orchestrator | 2025-06-11 15:18:08.399461 | orchestrator | + echo 2025-06-11 15:18:08.399472 | orchestrator | + osism container testbed-manager ps 2025-06-11 15:18:10.459635 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-11 15:18:10.459823 | orchestrator | 7960075f31b4 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2025-06-11 15:18:10.459868 | orchestrator | 5632b5bb3152 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager 2025-06-11 15:18:10.459897 | orchestrator | 639db4d32464 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-11 15:18:10.459913 | orchestrator | 07ecb47abebf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-11 15:18:10.459930 | orchestrator | b48638c8cbac registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server 2025-06-11 15:18:10.459955 | orchestrator | edf253fbfee8 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-06-11 15:18:10.459967 | orchestrator | c40ccbbbc011 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-11 15:18:10.459977 | orchestrator | 
3f9ab52b2537 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-11 15:18:10.459988 | orchestrator | e68f3e0d3c44 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-11 15:18:10.460025 | orchestrator | 0156cbed623b phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-06-11 15:18:10.460036 | orchestrator | e8e65cbe7a6d registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2025-06-11 15:18:10.460047 | orchestrator | 32df6807984b registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-06-11 15:18:10.460057 | orchestrator | a5df3d6ce38e registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 39 minutes ago Up 39 minutes (healthy) osism-ansible 2025-06-11 15:18:10.460067 | orchestrator | f1d4542f5500 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-11 15:18:10.460082 | orchestrator | 2d8d84341176 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-06-11 15:18:10.460112 | orchestrator | ded3a3989354 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) ceph-ansible 2025-06-11 15:18:10.460123 | orchestrator | daa20b648b04 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) kolla-ansible 2025-06-11 15:18:10.460133 | orchestrator | de49ecc967d4 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-kubernetes 2025-06-11 15:18:10.460143 | orchestrator | 5941548dfdb5 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-11 15:18:10.460152 | orchestrator | a45e2c6210b4 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 39 minutes (healthy) osismclient 2025-06-11 15:18:10.460162 | orchestrator | 328aeabb718c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-11 15:18:10.460172 | orchestrator | a3c80769f3b2 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-11 15:18:10.460181 | orchestrator | 9fce904728e0 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2025-06-11 15:18:10.460191 | orchestrator | ce9a4224eaa3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-06-11 15:18:10.460208 | orchestrator | 0a65dc18750a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-06-11 15:18:10.460218 | orchestrator | 7fee9e046190 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-06-11 15:18:10.460228 | orchestrator | 515518ac4bfa registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago 
Up 39 minutes (healthy) manager-listener-1 2025-06-11 15:18:10.460238 | orchestrator | 670186a6c4d1 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 58 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-11 15:18:10.687276 | orchestrator | 2025-06-11 15:18:10.687396 | orchestrator | ## Images @ testbed-manager 2025-06-11 15:18:10.687421 | orchestrator | 2025-06-11 15:18:10.687443 | orchestrator | + echo 2025-06-11 15:18:10.687463 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-11 15:18:10.687483 | orchestrator | + echo 2025-06-11 15:18:10.687501 | orchestrator | + osism container testbed-manager images 2025-06-11 15:18:12.691817 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-11 15:18:12.692817 | orchestrator | registry.osism.tech/osism/osism-ansible latest ecb09e90b948 46 minutes ago 578MB 2025-06-11 15:18:12.692871 | orchestrator | registry.osism.tech/osism/osism-ansible 3d1aae8c8bc9 About an hour ago 578MB 2025-06-11 15:18:12.692886 | orchestrator | registry.osism.tech/osism/osism latest 2ede27250b78 3 hours ago 298MB 2025-06-11 15:18:12.692899 | orchestrator | registry.osism.tech/osism/homer v25.05.2 17c31370e24a 12 hours ago 11.5MB 2025-06-11 15:18:12.692912 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e8e26f2eeda6 12 hours ago 226MB 2025-06-11 15:18:12.692925 | orchestrator | registry.osism.tech/osism/cephclient reef 955d7e54b6bb 12 hours ago 453MB 2025-06-11 15:18:12.692937 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 01000079e97a 14 hours ago 747MB 2025-06-11 15:18:12.692949 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 575ee1444c7f 14 hours ago 629MB 2025-06-11 15:18:12.692983 | orchestrator | registry.osism.tech/kolla/cron 2024.2 32aba4855114 14 hours ago 319MB 2025-06-11 15:18:12.692995 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 d8847513aa6e 14 hours ago 361MB 2025-06-11 15:18:12.693006 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 6d4dfea488bb 14 hours ago 411MB 2025-06-11 15:18:12.693017 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ead50075475c 14 hours ago 359MB 2025-06-11 15:18:12.693027 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 ead4a26e5ce5 14 hours ago 457MB 2025-06-11 15:18:12.693038 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8a22c39cf15e 14 hours ago 892MB 2025-06-11 15:18:12.693049 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest bba835a99b89 15 hours ago 1.21GB 2025-06-11 15:18:12.693060 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 950855236bcf 15 hours ago 575MB 2025-06-11 15:18:12.693070 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5d9350533d8b 15 hours ago 539MB 2025-06-11 15:18:12.693103 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest bcec94d5cf97 15 hours ago 310MB 2025-06-11 15:18:12.693115 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 13 days ago 41.4MB 2025-06-11 15:18:12.693125 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 2 weeks ago 224MB 2025-06-11 15:18:12.693136 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 3 months ago 328MB 2025-06-11 15:18:12.693147 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-11 
15:18:12.693157 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-11 15:18:12.693168 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB 2025-06-11 15:18:12.934063 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-11 15:18:12.935067 | orchestrator | ++ semver latest 5.0.0 2025-06-11 15:18:13.005046 | orchestrator | 2025-06-11 15:18:13.005143 | orchestrator | ## Containers @ testbed-node-0 2025-06-11 15:18:13.005159 | orchestrator | 2025-06-11 15:18:13.005170 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-11 15:18:13.005182 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-11 15:18:13.005193 | orchestrator | + echo 2025-06-11 15:18:13.005204 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-11 15:18:13.005216 | orchestrator | + echo 2025-06-11 15:18:13.005227 | orchestrator | + osism container testbed-node-0 ps 2025-06-11 15:18:15.250192 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-11 15:18:15.250395 | orchestrator | 436346d8a7be registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-11 15:18:15.250420 | orchestrator | 31ac270212cb registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-11 15:18:15.250433 | orchestrator | c235e40990a3 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-11 15:18:15.250444 | orchestrator | 27615cd17acc registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-11 15:18:15.250455 | orchestrator | b832a99eca8b registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-11 15:18:15.250466 | orchestrator | b2a6e99f2abe registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-11 15:18:15.250477 | orchestrator | 24cc0f66059f registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2025-06-11 15:18:15.250489 | orchestrator | e5bed087b652 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-11 15:18:15.250500 | orchestrator | 2f9eeb9c0faf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2025-06-11 15:18:15.250511 | orchestrator | 4d95f705fc10 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-11 15:18:15.250581 | orchestrator | 02f28518f7d0 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api 2025-06-11 15:18:15.250616 | orchestrator | cd6d3701a501 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-11 15:18:15.250627 | orchestrator | a70d8dba44af registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-06-11 15:18:15.250640 | orchestrator | ad8bbbfa0af2 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init 
--single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-11 15:18:15.250664 | orchestrator | 9ec68f6c9a9f registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-11 15:18:15.250676 | orchestrator | 0570aadd714f registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-11 15:18:15.250687 | orchestrator | 67b503bd6881 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-11 15:18:15.250698 | orchestrator | a89e57bc13b4 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-11 15:18:15.250709 | orchestrator | 8a569861895f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-11 15:18:15.250720 | orchestrator | 746d538eadcd registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-11 15:18:15.250731 | orchestrator | 0827df8f514d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-11 15:18:15.250764 | orchestrator | c617b0e88d60 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-11 15:18:15.250776 | orchestrator | d9d4c24aece0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-06-11 15:18:15.250787 | orchestrator | d70f10dc31ae registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-11 15:18:15.250797 | orchestrator | c65e521b8294 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-11 15:18:15.250808 | orchestrator | e67935bff6e9 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-11 15:18:15.250819 | orchestrator | 571a3b43be77 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-06-11 15:18:15.250835 | orchestrator | 88372bfd49a0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-11 15:18:15.250846 | orchestrator | d98d9bc9bf24 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-11 15:18:15.250857 | orchestrator | 96c39fba2dda registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-11 15:18:15.250876 | orchestrator | 63a4df2d6c23 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-11 15:18:15.250893 | orchestrator | 814f259e8126 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-11 15:18:15.250913 | orchestrator | 00dc9b6176d0 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 
minutes (healthy) opensearch_dashboards 2025-06-11 15:18:15.250926 | orchestrator | 0d1d3a00b0f6 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-11 15:18:15.250937 | orchestrator | 30884efb3a3a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-11 15:18:15.250948 | orchestrator | 1492946b8bad registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-11 15:18:15.250959 | orchestrator | c6d4e1fef8a1 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-11 15:18:15.250970 | orchestrator | 52c7cd9162b2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-11 15:18:15.250980 | orchestrator | a188e05de4bf registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-11 15:18:15.250991 | orchestrator | 45a04c45240f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-11 15:18:15.251002 | orchestrator | 681135a441a6 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-11 15:18:15.251012 | orchestrator | f97fc4084bc1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-11 15:18:15.251023 | orchestrator | 74b478c5def2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-06-11 15:18:15.251035 | orchestrator | 6a4a7d010f28 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-11 15:18:15.251060 | orchestrator | 7750da795f62 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-11 15:18:15.251072 | orchestrator | 200ebc31b5ba registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-11 15:18:15.251083 | orchestrator | f849ffcee901 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-11 15:18:15.251094 | orchestrator | d3ada01dfdd4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-11 15:18:15.251104 | orchestrator | bacc692db85b registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-11 15:18:15.251115 | orchestrator | e2c38ce1ac89 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-11 15:18:15.251133 | orchestrator | 4f69bb38df0d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-11 15:18:15.251144 | orchestrator | f774d0d209d8 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-11 15:18:15.530819 | orchestrator | 2025-06-11 15:18:15.530924 | orchestrator | ## Images @ testbed-node-0 2025-06-11 15:18:15.530940 | orchestrator | 2025-06-11 15:18:15.530953 | orchestrator | + echo 2025-06-11 15:18:15.530965 | orchestrator | + echo 
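Listings like the one above are easy to eyeball but awkward to assert on. Purely as an illustration (the testbed goes through `osism container <node> ps` instead), Docker can be asked directly for containers whose health check is failing:

    # Exit non-zero if any container on this host reports an unhealthy
    # health check; healthy and health-check-less containers are ignored.
    unhealthy=$(docker ps --filter health=unhealthy --format '{{.Names}}')
    if [ -n "$unhealthy" ]; then
        echo "unhealthy containers: $unhealthy" >&2
        exit 1
    fi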
'## Images @ testbed-node-0' 2025-06-11 15:18:15.530977 | orchestrator | + echo 2025-06-11 15:18:15.530989 | orchestrator | + osism container testbed-node-0 images 2025-06-11 15:18:17.718909 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-11 15:18:17.719089 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b42ad68c3b78 12 hours ago 1.27GB 2025-06-11 15:18:17.719861 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 01000079e97a 14 hours ago 747MB 2025-06-11 15:18:17.719882 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b14ba26f0e4e 14 hours ago 1.01GB 2025-06-11 15:18:17.719894 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 18d2f477b2f0 14 hours ago 327MB 2025-06-11 15:18:17.719905 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 74d548b94824 14 hours ago 1.59GB 2025-06-11 15:18:17.719916 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 6391117a24e6 14 hours ago 1.55GB 2025-06-11 15:18:17.719928 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 575ee1444c7f 14 hours ago 629MB 2025-06-11 15:18:17.719938 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e219643eaabb 14 hours ago 319MB 2025-06-11 15:18:17.719949 | orchestrator | registry.osism.tech/kolla/cron 2024.2 32aba4855114 14 hours ago 319MB 2025-06-11 15:18:17.719964 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 314ea026567f 14 hours ago 419MB 2025-06-11 15:18:17.719984 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 eb3b06ca5291 14 hours ago 376MB 2025-06-11 15:18:17.720003 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a3d32c91e7c7 14 hours ago 330MB 2025-06-11 15:18:17.720020 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5e014cc2f7f6 14 hours ago 1.21GB 2025-06-11 15:18:17.720037 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c67fde0d3c8e 14 hours ago 354MB 2025-06-11 15:18:17.720055 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 6d4dfea488bb 14 hours ago 411MB 2025-06-11 15:18:17.720073 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ead50075475c 14 hours ago 359MB 2025-06-11 15:18:17.720090 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5070e9a178c2 14 hours ago 345MB 2025-06-11 15:18:17.720108 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 e657d39ae7a8 14 hours ago 352MB 2025-06-11 15:18:17.720126 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 98111cf78769 14 hours ago 362MB 2025-06-11 15:18:17.720146 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 78374099d25a 14 hours ago 362MB 2025-06-11 15:18:17.720165 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 575d6245f394 14 hours ago 591MB 2025-06-11 15:18:17.720185 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9f21f4aef781 14 hours ago 325MB 2025-06-11 15:18:17.720196 | orchestrator | registry.osism.tech/kolla/redis 2024.2 15a55b3cd11c 14 hours ago 326MB 2025-06-11 15:18:17.720207 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bf0ee8f970e 14 hours ago 1.31GB 2025-06-11 15:18:17.720241 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b0eb13735227 14 hours ago 1.2GB 2025-06-11 15:18:17.720253 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ba204a519225 14 hours ago 1.25GB 2025-06-11 15:18:17.720263 | orchestrator | 
registry.osism.tech/kolla/glance-api 2024.2 419a06680a6d 14 hours ago 1.15GB 2025-06-11 15:18:17.720274 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1bda73fa37ed 14 hours ago 1.05GB 2025-06-11 15:18:17.720284 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d34caa2f311b 14 hours ago 1.05GB 2025-06-11 15:18:17.720295 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 58735f6e64b8 14 hours ago 1.06GB 2025-06-11 15:18:17.720310 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d18fd928befe 14 hours ago 1.06GB 2025-06-11 15:18:17.720328 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4c0203ea0578 14 hours ago 1.05GB 2025-06-11 15:18:17.720356 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 fdc5a0c5e423 14 hours ago 1.05GB 2025-06-11 15:18:17.720374 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7d521af20385 14 hours ago 1.11GB 2025-06-11 15:18:17.720390 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 57c262878d82 14 hours ago 1.13GB 2025-06-11 15:18:17.720425 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 65dc7ce82199 14 hours ago 1.11GB 2025-06-11 15:18:17.720478 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 8140c11b1c48 14 hours ago 1.42GB 2025-06-11 15:18:17.720507 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09f402463b09 14 hours ago 1.3GB 2025-06-11 15:18:17.720554 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e7ab3ee35f80 14 hours ago 1.29GB 2025-06-11 15:18:17.720572 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 cbbc5909217c 14 hours ago 1.29GB 2025-06-11 15:18:17.720589 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 b6a2f3b2d105 14 hours ago 1.04GB 2025-06-11 15:18:17.720606 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 ca1984d87b9c 14 hours ago 1.04GB 2025-06-11 15:18:17.720622 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 9e94fd66317d 14 hours ago 1.04GB 2025-06-11 15:18:17.720639 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 83e0a797c649 14 hours ago 1.41GB 2025-06-11 15:18:17.720657 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c1f0e9e687a 14 hours ago 1.41GB 2025-06-11 15:18:17.720674 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 d6520bda780b 14 hours ago 1.12GB 2025-06-11 15:18:17.720692 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 66b0efd02ed8 14 hours ago 1.11GB 2025-06-11 15:18:17.720710 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 cc9448ff278d 14 hours ago 1.12GB 2025-06-11 15:18:17.720730 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 848b6e585bcb 14 hours ago 1.1GB 2025-06-11 15:18:17.720744 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 b85aa6f22039 14 hours ago 1.1GB 2025-06-11 15:18:17.720755 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 73403acd2ccf 14 hours ago 1.12GB 2025-06-11 15:18:17.720766 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 e10cacc1e32e 14 hours ago 1.1GB 2025-06-11 15:18:17.720777 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3153ef1e934b 14 hours ago 1.06GB 2025-06-11 15:18:17.720787 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 da0dcf86797b 14 hours ago 1.06GB 2025-06-11 15:18:17.720812 | orchestrator | 
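The image inventory above and below is dominated by kolla images that should all carry the release tag matching OPENSTACK_VERSION (2024.2 in this run). A small illustrative cross-check, not part of the scripts traced here:

    # Print any kolla image whose tag differs from the expected release.
    docker images --format '{{.Repository}}:{{.Tag}}' \
        | awk -F: '/\/kolla\// && $NF != "2024.2"'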
registry.osism.tech/kolla/barbican-keystone-listener 2024.2 b51e63e40170 14 hours ago 1.06GB 2025-06-11 15:18:17.720822 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 53becce962e1 14 hours ago 1.04GB 2025-06-11 15:18:17.720833 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 a8c96d1782ed 14 hours ago 1.04GB 2025-06-11 15:18:17.720844 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 24a450688917 14 hours ago 1.04GB 2025-06-11 15:18:17.720854 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 54e934bc8433 14 hours ago 1.04GB 2025-06-11 15:18:17.720865 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bc6b6dbde7b2 14 hours ago 947MB 2025-06-11 15:18:17.720875 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 73298e3be2dc 14 hours ago 948MB 2025-06-11 15:18:17.720886 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d4ca5571b7eb 14 hours ago 948MB 2025-06-11 15:18:17.720896 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5b89fa9a2abc 14 hours ago 947MB 2025-06-11 15:18:17.992072 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-11 15:18:17.992169 | orchestrator | ++ semver latest 5.0.0 2025-06-11 15:18:18.035737 | orchestrator | 2025-06-11 15:18:18.035810 | orchestrator | ## Containers @ testbed-node-1 2025-06-11 15:18:18.035824 | orchestrator | 2025-06-11 15:18:18.035835 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-11 15:18:18.035870 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-11 15:18:18.035882 | orchestrator | + echo 2025-06-11 15:18:18.035894 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-11 15:18:18.035905 | orchestrator | + echo 2025-06-11 15:18:18.035916 | orchestrator | + osism container testbed-node-1 ps 2025-06-11 15:18:20.210595 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-11 15:18:20.210703 | orchestrator | 3ce1ce555afc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-11 15:18:20.210720 | orchestrator | b7aa31cc9536 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-11 15:18:20.210732 | orchestrator | 6068fd7f3096 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-11 15:18:20.210744 | orchestrator | ccd68dd0f096 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-11 15:18:20.210755 | orchestrator | 066579cf71dc registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2025-06-11 15:18:20.210766 | orchestrator | b50f75049d07 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-11 15:18:20.210777 | orchestrator | 2b875b44a6bc registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-11 15:18:20.210792 | orchestrator | bab2daa90115 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-11 15:18:20.210811 | orchestrator | b51ec3657cd7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2025-06-11 15:18:20.210829 | 
orchestrator | 89170f4c6b72 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-11 15:18:20.210876 | orchestrator | 2c5c05144c9e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api 2025-06-11 15:18:20.210896 | orchestrator | cca9e6302cf4 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-11 15:18:20.210915 | orchestrator | fd2d02d37ed1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-11 15:18:20.210926 | orchestrator | a7ef1d89daa1 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-06-11 15:18:20.210938 | orchestrator | 9d8f4a68168e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-11 15:18:20.210949 | orchestrator | 1448f5bcb899 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-11 15:18:20.210960 | orchestrator | 8b861fb6aad5 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-11 15:18:20.210988 | orchestrator | 1009e8a093da registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-11 15:18:20.210999 | orchestrator | 494e7fcaf77a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-11 15:18:20.211015 | orchestrator | 3b9541ded15c registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-11 15:18:20.211026 | orchestrator | 5e148e5a5600 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-11 15:18:20.211058 | orchestrator | 8c619a18432b registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-11 15:18:20.211069 | orchestrator | 7c257186651a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-06-11 15:18:20.211080 | orchestrator | 118186801fbb registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-11 15:18:20.211092 | orchestrator | 649a36f8715c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-11 15:18:20.211102 | orchestrator | d2059d52b189 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-11 15:18:20.211113 | orchestrator | 362c0224586a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-06-11 15:18:20.211124 | orchestrator | 193074624af0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-11 
15:18:20.211134 | orchestrator | a6e746b0cdcd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-11 15:18:20.211153 | orchestrator | 59e96723820d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-11 15:18:20.211164 | orchestrator | b7b26648e7a3 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-11 15:18:20.211175 | orchestrator | 63c3e1b0c72b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-11 15:18:20.211185 | orchestrator | 1412b8b40c6c registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-11 15:18:20.211196 | orchestrator | b11876f45df1 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-11 15:18:20.211206 | orchestrator | 0948261c4fe7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-11 15:18:20.211217 | orchestrator | f94a4596e700 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-11 15:18:20.211227 | orchestrator | 689689e25ef3 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-11 15:18:20.211238 | orchestrator | fda7cbab36e4 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-11 15:18:20.211249 | orchestrator | f9b93c9678ec registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-11 15:18:20.211265 | orchestrator | 075a571e3448 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-11 15:18:20.211277 | orchestrator | f8bf84e974c3 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-11 15:18:20.211287 | orchestrator | 641ea2563649 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-11 15:18:20.211298 | orchestrator | d0b4f89a408c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-11 15:18:20.211309 | orchestrator | 60e7022a5e06 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-06-11 15:18:20.211328 | orchestrator | 20aa778961c6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-11 15:18:20.211339 | orchestrator | aa7e6f78d096 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-11 15:18:20.211350 | orchestrator | f31cde3471bf registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-11 15:18:20.211361 | orchestrator | 7c127b8a0d65 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-11 15:18:20.211372 | orchestrator | 1d03ac08cde5 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-11 15:18:20.211391 | orchestrator | 9e27e10064d9 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-11 15:18:20.211402 | orchestrator | d82ce9d1d5ce registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-11 15:18:20.211413 | orchestrator | b0555b5b68f7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-11 15:18:20.454445 | orchestrator | 2025-06-11 15:18:20.454582 | orchestrator | ## Images @ testbed-node-1 2025-06-11 15:18:20.454598 | orchestrator | 2025-06-11 15:18:20.454609 | orchestrator | + echo 2025-06-11 15:18:20.454620 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-11 15:18:20.454631 | orchestrator | + echo 2025-06-11 15:18:20.454641 | orchestrator | + osism container testbed-node-1 images 2025-06-11 15:18:22.581280 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-11 15:18:22.581386 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b42ad68c3b78 12 hours ago 1.27GB 2025-06-11 15:18:22.581400 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 01000079e97a 14 hours ago 747MB 2025-06-11 15:18:22.581412 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b14ba26f0e4e 14 hours ago 1.01GB 2025-06-11 15:18:22.581423 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 18d2f477b2f0 14 hours ago 327MB 2025-06-11 15:18:22.581433 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 74d548b94824 14 hours ago 1.59GB 2025-06-11 15:18:22.581444 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 6391117a24e6 14 hours ago 1.55GB 2025-06-11 15:18:22.581454 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 575ee1444c7f 14 hours ago 629MB 2025-06-11 15:18:22.581465 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e219643eaabb 14 hours ago 319MB 2025-06-11 15:18:22.581475 | orchestrator | registry.osism.tech/kolla/cron 2024.2 32aba4855114 14 hours ago 319MB 2025-06-11 15:18:22.581486 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 314ea026567f 14 hours ago 419MB 2025-06-11 15:18:22.581496 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 eb3b06ca5291 14 hours ago 376MB 2025-06-11 15:18:22.581507 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a3d32c91e7c7 14 hours ago 330MB 2025-06-11 15:18:22.581517 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5e014cc2f7f6 14 hours ago 1.21GB 2025-06-11 15:18:22.581570 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c67fde0d3c8e 14 hours ago 354MB 2025-06-11 15:18:22.581582 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 6d4dfea488bb 14 hours ago 411MB 2025-06-11 15:18:22.581593 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ead50075475c 14 hours ago 359MB 2025-06-11 15:18:22.581604 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5070e9a178c2 14 hours ago 345MB 2025-06-11 15:18:22.581615 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 e657d39ae7a8 14 hours ago 352MB 2025-06-11 15:18:22.581626 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 98111cf78769 14 hours ago 362MB 2025-06-11 15:18:22.581656 | orchestrator | 
registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 78374099d25a 14 hours ago 362MB 2025-06-11 15:18:22.581667 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 575d6245f394 14 hours ago 591MB 2025-06-11 15:18:22.581678 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9f21f4aef781 14 hours ago 325MB 2025-06-11 15:18:22.581714 | orchestrator | registry.osism.tech/kolla/redis 2024.2 15a55b3cd11c 14 hours ago 326MB 2025-06-11 15:18:22.581725 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bf0ee8f970e 14 hours ago 1.31GB 2025-06-11 15:18:22.581735 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b0eb13735227 14 hours ago 1.2GB 2025-06-11 15:18:22.581746 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ba204a519225 14 hours ago 1.25GB 2025-06-11 15:18:22.581756 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 419a06680a6d 14 hours ago 1.15GB 2025-06-11 15:18:22.581766 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1bda73fa37ed 14 hours ago 1.05GB 2025-06-11 15:18:22.581777 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d34caa2f311b 14 hours ago 1.05GB 2025-06-11 15:18:22.581787 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 58735f6e64b8 14 hours ago 1.06GB 2025-06-11 15:18:22.581797 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d18fd928befe 14 hours ago 1.06GB 2025-06-11 15:18:22.581809 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4c0203ea0578 14 hours ago 1.05GB 2025-06-11 15:18:22.581820 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 fdc5a0c5e423 14 hours ago 1.05GB 2025-06-11 15:18:22.581832 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7d521af20385 14 hours ago 1.11GB 2025-06-11 15:18:22.581845 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 57c262878d82 14 hours ago 1.13GB 2025-06-11 15:18:22.581856 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 65dc7ce82199 14 hours ago 1.11GB 2025-06-11 15:18:22.581886 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 8140c11b1c48 14 hours ago 1.42GB 2025-06-11 15:18:22.581899 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09f402463b09 14 hours ago 1.3GB 2025-06-11 15:18:22.581911 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e7ab3ee35f80 14 hours ago 1.29GB 2025-06-11 15:18:22.581923 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 cbbc5909217c 14 hours ago 1.29GB 2025-06-11 15:18:22.581935 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 9e94fd66317d 14 hours ago 1.04GB 2025-06-11 15:18:22.581947 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 83e0a797c649 14 hours ago 1.41GB 2025-06-11 15:18:22.581958 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c1f0e9e687a 14 hours ago 1.41GB 2025-06-11 15:18:22.581970 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3153ef1e934b 14 hours ago 1.06GB 2025-06-11 15:18:22.581982 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 da0dcf86797b 14 hours ago 1.06GB 2025-06-11 15:18:22.581994 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 b51e63e40170 14 hours ago 1.06GB 2025-06-11 15:18:22.582006 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bc6b6dbde7b2 14 hours ago 947MB 2025-06-11 15:18:22.582113 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 
73298e3be2dc 14 hours ago 948MB 2025-06-11 15:18:22.582128 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d4ca5571b7eb 14 hours ago 948MB 2025-06-11 15:18:22.582141 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5b89fa9a2abc 14 hours ago 947MB 2025-06-11 15:18:22.826870 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-11 15:18:22.827112 | orchestrator | ++ semver latest 5.0.0 2025-06-11 15:18:22.892611 | orchestrator | 2025-06-11 15:18:22.892738 | orchestrator | ## Containers @ testbed-node-2 2025-06-11 15:18:22.892754 | orchestrator | 2025-06-11 15:18:22.892766 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-11 15:18:22.892778 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-11 15:18:22.892789 | orchestrator | + echo 2025-06-11 15:18:22.892801 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-11 15:18:22.892813 | orchestrator | + echo 2025-06-11 15:18:22.892824 | orchestrator | + osism container testbed-node-2 ps 2025-06-11 15:18:25.090443 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-11 15:18:25.090596 | orchestrator | 2ae4af961a29 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-11 15:18:25.090614 | orchestrator | fd3eff1c7fc5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-11 15:18:25.090627 | orchestrator | b9eb3d1ad98a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-11 15:18:25.090638 | orchestrator | c40fed9fecc9 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-11 15:18:25.090649 | orchestrator | b37fbdbb336b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2025-06-11 15:18:25.090681 | orchestrator | d4dfb0c68287 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-11 15:18:25.090693 | orchestrator | b21e0ba00aba registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-11 15:18:25.090723 | orchestrator | af5b17270ac8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-11 15:18:25.090735 | orchestrator | 6e005cb2e2ad registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-11 15:18:25.090746 | orchestrator | daa38c2f8671 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-11 15:18:25.090757 | orchestrator | 04bcd78e0a77 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api 2025-06-11 15:18:25.090768 | orchestrator | 7d7782e9aa6a registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-11 15:18:25.090778 | orchestrator | 6e84680d11b5 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-11 15:18:25.090789 | orchestrator | af969263c376 
registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-06-11 15:18:25.090806 | orchestrator | cab4a1d92ebd registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-11 15:18:25.090817 | orchestrator | f2c64c074635 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-11 15:18:25.090828 | orchestrator | d08924d890c5 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-11 15:18:25.090857 | orchestrator | ac9f46f8bb4a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-11 15:18:25.090869 | orchestrator | fc4c720cab2c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-11 15:18:25.090879 | orchestrator | a7a24cc845a7 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-11 15:18:25.090890 | orchestrator | cac7b7e0ab74 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-06-11 15:18:25.090920 | orchestrator | 8e1cd799726b registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-11 15:18:25.090931 | orchestrator | 3091b79b3a54 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-11 15:18:25.090942 | orchestrator | 626999d29946 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-06-11 15:18:25.090953 | orchestrator | 9c91c2e13e21 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-11 15:18:25.090964 | orchestrator | 847eb7ea3581 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-11 15:18:25.090975 | orchestrator | e235f14b5267 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-06-11 15:18:25.090987 | orchestrator | e3cbc7428a26 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-11 15:18:25.091000 | orchestrator | e956d58ad92b registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-11 15:18:25.091013 | orchestrator | cfec7d9698a6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-11 15:18:25.091025 | orchestrator | 73e01ef76d62 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-11 15:18:25.091037 | orchestrator | 36e926f6d1b2 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-11 15:18:25.091049 | orchestrator | 
4a88b2f9f466 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-11 15:18:25.091062 | orchestrator | bdad236eb422 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-11 15:18:25.091074 | orchestrator | 8a1d1921aa41 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-06-11 15:18:25.091087 | orchestrator | 429bf6374c8b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-11 15:18:25.091099 | orchestrator | f9797d161cc6 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-11 15:18:25.091118 | orchestrator | 543fdd80b6eb registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-11 15:18:25.091132 | orchestrator | 2ca79293ede0 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-11 15:18:25.091150 | orchestrator | 363084ad83d9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-11 15:18:25.091163 | orchestrator | c2254b97b6cf registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-11 15:18:25.091175 | orchestrator | a3f64cda40fa registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-11 15:18:25.091188 | orchestrator | 46a40d069c32 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-11 15:18:25.091200 | orchestrator | 9ac2a718e124 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-06-11 15:18:25.091220 | orchestrator | 2c30e22f80dc registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-11 15:18:25.091234 | orchestrator | 822dbf264813 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-11 15:18:25.091246 | orchestrator | 7e2be831c319 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-11 15:18:25.091258 | orchestrator | 535cb931fbbb registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-11 15:18:25.091270 | orchestrator | b9501f0379b2 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-11 15:18:25.091283 | orchestrator | 7c0ea941a6b6 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-11 15:18:25.091296 | orchestrator | 4f0faf7218ea registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-11 15:18:25.091309 | orchestrator | caae01ece373 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-11 15:18:25.420335 | orchestrator | 2025-06-11 15:18:25.420433 | orchestrator | ## Images @ testbed-node-2 2025-06-11 15:18:25.420448 | orchestrator | 2025-06-11 
15:18:25.420461 | orchestrator | + echo 2025-06-11 15:18:25.420473 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-11 15:18:25.420485 | orchestrator | + echo 2025-06-11 15:18:25.420496 | orchestrator | + osism container testbed-node-2 images 2025-06-11 15:18:27.622824 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-11 15:18:27.622949 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b42ad68c3b78 12 hours ago 1.27GB 2025-06-11 15:18:27.622966 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 01000079e97a 14 hours ago 747MB 2025-06-11 15:18:27.622978 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b14ba26f0e4e 14 hours ago 1.01GB 2025-06-11 15:18:27.623012 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 18d2f477b2f0 14 hours ago 327MB 2025-06-11 15:18:27.623023 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 74d548b94824 14 hours ago 1.59GB 2025-06-11 15:18:27.623034 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 6391117a24e6 14 hours ago 1.55GB 2025-06-11 15:18:27.623045 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 575ee1444c7f 14 hours ago 629MB 2025-06-11 15:18:27.623055 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e219643eaabb 14 hours ago 319MB 2025-06-11 15:18:27.623066 | orchestrator | registry.osism.tech/kolla/cron 2024.2 32aba4855114 14 hours ago 319MB 2025-06-11 15:18:27.623077 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 314ea026567f 14 hours ago 419MB 2025-06-11 15:18:27.623088 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 eb3b06ca5291 14 hours ago 376MB 2025-06-11 15:18:27.623100 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a3d32c91e7c7 14 hours ago 330MB 2025-06-11 15:18:27.623110 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5e014cc2f7f6 14 hours ago 1.21GB 2025-06-11 15:18:27.623121 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c67fde0d3c8e 14 hours ago 354MB 2025-06-11 15:18:27.623131 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 6d4dfea488bb 14 hours ago 411MB 2025-06-11 15:18:27.623142 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ead50075475c 14 hours ago 359MB 2025-06-11 15:18:27.623153 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 5070e9a178c2 14 hours ago 345MB 2025-06-11 15:18:27.623163 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 e657d39ae7a8 14 hours ago 352MB 2025-06-11 15:18:27.623174 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 98111cf78769 14 hours ago 362MB 2025-06-11 15:18:27.623184 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 78374099d25a 14 hours ago 362MB 2025-06-11 15:18:27.623195 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 575d6245f394 14 hours ago 591MB 2025-06-11 15:18:27.623206 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9f21f4aef781 14 hours ago 325MB 2025-06-11 15:18:27.623216 | orchestrator | registry.osism.tech/kolla/redis 2024.2 15a55b3cd11c 14 hours ago 326MB 2025-06-11 15:18:27.623227 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bf0ee8f970e 14 hours ago 1.31GB 2025-06-11 15:18:27.623237 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b0eb13735227 14 hours ago 1.2GB 2025-06-11 15:18:27.623248 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ba204a519225 14 
hours ago 1.25GB 2025-06-11 15:18:27.623258 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 419a06680a6d 14 hours ago 1.15GB 2025-06-11 15:18:27.623269 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1bda73fa37ed 14 hours ago 1.05GB 2025-06-11 15:18:27.623280 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d34caa2f311b 14 hours ago 1.05GB 2025-06-11 15:18:27.623290 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 58735f6e64b8 14 hours ago 1.06GB 2025-06-11 15:18:27.623300 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d18fd928befe 14 hours ago 1.06GB 2025-06-11 15:18:27.623311 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4c0203ea0578 14 hours ago 1.05GB 2025-06-11 15:18:27.623322 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 fdc5a0c5e423 14 hours ago 1.05GB 2025-06-11 15:18:27.623339 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7d521af20385 14 hours ago 1.11GB 2025-06-11 15:18:27.623350 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 57c262878d82 14 hours ago 1.13GB 2025-06-11 15:18:27.623361 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 65dc7ce82199 14 hours ago 1.11GB 2025-06-11 15:18:27.623406 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 8140c11b1c48 14 hours ago 1.42GB 2025-06-11 15:18:27.623420 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09f402463b09 14 hours ago 1.3GB 2025-06-11 15:18:27.623433 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 e7ab3ee35f80 14 hours ago 1.29GB 2025-06-11 15:18:27.623445 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 cbbc5909217c 14 hours ago 1.29GB 2025-06-11 15:18:27.623456 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 9e94fd66317d 14 hours ago 1.04GB 2025-06-11 15:18:27.623468 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 83e0a797c649 14 hours ago 1.41GB 2025-06-11 15:18:27.623479 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 5c1f0e9e687a 14 hours ago 1.41GB 2025-06-11 15:18:27.623491 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3153ef1e934b 14 hours ago 1.06GB 2025-06-11 15:18:27.623502 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 da0dcf86797b 14 hours ago 1.06GB 2025-06-11 15:18:27.623513 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 b51e63e40170 14 hours ago 1.06GB 2025-06-11 15:18:27.623525 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bc6b6dbde7b2 14 hours ago 947MB 2025-06-11 15:18:27.623564 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 73298e3be2dc 14 hours ago 948MB 2025-06-11 15:18:27.623577 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d4ca5571b7eb 14 hours ago 948MB 2025-06-11 15:18:27.623596 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5b89fa9a2abc 14 hours ago 947MB 2025-06-11 15:18:27.909944 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-11 15:18:27.919927 | orchestrator | + set -e 2025-06-11 15:18:27.920002 | orchestrator | + source /opt/manager-vars.sh 2025-06-11 15:18:27.921451 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-11 15:18:27.921488 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-11 15:18:27.921500 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-11 15:18:27.921510 | orchestrator | ++ CEPH_VERSION=reef 2025-06-11 15:18:27.921522 | 
orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-11 15:18:27.921534 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-11 15:18:27.921569 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 15:18:27.921581 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 15:18:27.921591 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-11 15:18:27.921636 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-11 15:18:27.921647 | orchestrator | ++ export ARA=false 2025-06-11 15:18:27.921659 | orchestrator | ++ ARA=false 2025-06-11 15:18:27.921670 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-11 15:18:27.921681 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-11 15:18:27.921692 | orchestrator | ++ export TEMPEST=false 2025-06-11 15:18:27.921702 | orchestrator | ++ TEMPEST=false 2025-06-11 15:18:27.921713 | orchestrator | ++ export IS_ZUUL=true 2025-06-11 15:18:27.921723 | orchestrator | ++ IS_ZUUL=true 2025-06-11 15:18:27.921734 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 15:18:27.921745 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 15:18:27.921846 | orchestrator | ++ export EXTERNAL_API=false 2025-06-11 15:18:27.921861 | orchestrator | ++ EXTERNAL_API=false 2025-06-11 15:18:27.921872 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-11 15:18:27.921882 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-11 15:18:27.921893 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-11 15:18:27.921904 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-11 15:18:27.921914 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-11 15:18:27.921949 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-11 15:18:27.921961 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-11 15:18:27.921972 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-11 15:18:27.929719 | orchestrator | + set -e 2025-06-11 15:18:27.929769 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 15:18:27.929782 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 15:18:27.929794 | orchestrator | ++ INTERACTIVE=false 2025-06-11 15:18:27.929806 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 15:18:27.929817 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 15:18:27.929829 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-11 15:18:27.930477 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-11 15:18:27.937875 | orchestrator | 2025-06-11 15:18:27.937934 | orchestrator | # Ceph status 2025-06-11 15:18:27.937952 | orchestrator | 2025-06-11 15:18:27.937968 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 15:18:27.937981 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 15:18:27.937993 | orchestrator | + echo 2025-06-11 15:18:27.938004 | orchestrator | + echo '# Ceph status' 2025-06-11 15:18:27.938060 | orchestrator | + echo 2025-06-11 15:18:27.938072 | orchestrator | + ceph -s 2025-06-11 15:18:28.554371 | orchestrator | cluster: 2025-06-11 15:18:28.554475 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-11 15:18:28.554491 | orchestrator | health: HEALTH_OK 2025-06-11 15:18:28.554504 | orchestrator | 2025-06-11 15:18:28.554516 | orchestrator | services: 2025-06-11 15:18:28.554528 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-11 15:18:28.554602 | orchestrator | 
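The trace above condenses two helper scripts: manager-version.sh derives MANAGER_VERSION from the configuration repository, and check-services.sh dispatches on CEPH_STACK. A condensed sketch of both, reconstructed from the visible trace (the surrounding script bodies are assumed):

    # manager-version.sh: read the pinned manager version from the
    # configuration repo; this run has no pin, so "latest" is kept.
    MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' \
        /opt/configuration/environments/manager/configuration.yml)
    export MANAGER_VERSION=${MANAGER_VERSION:-latest}

    # check-services.sh: fan out to the per-stack check script; only the
    # ceph-ansible branch is exercised in this log.
    if [[ "$CEPH_STACK" == "ceph-ansible" ]]; then
        sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
    fi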
mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2025-06-11 15:18:28.554618 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-11 15:18:28.554629 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-06-11 15:18:28.554640 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-11 15:18:28.554651 | orchestrator | 2025-06-11 15:18:28.554662 | orchestrator | data: 2025-06-11 15:18:28.554673 | orchestrator | volumes: 1/1 healthy 2025-06-11 15:18:28.554684 | orchestrator | pools: 14 pools, 401 pgs 2025-06-11 15:18:28.554695 | orchestrator | objects: 523 objects, 2.2 GiB 2025-06-11 15:18:28.554706 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-11 15:18:28.554717 | orchestrator | pgs: 401 active+clean 2025-06-11 15:18:28.554728 | orchestrator | 2025-06-11 15:18:28.603636 | orchestrator | 2025-06-11 15:18:28.603732 | orchestrator | # Ceph versions 2025-06-11 15:18:28.603747 | orchestrator | 2025-06-11 15:18:28.603759 | orchestrator | + echo 2025-06-11 15:18:28.603771 | orchestrator | + echo '# Ceph versions' 2025-06-11 15:18:28.603783 | orchestrator | + echo 2025-06-11 15:18:28.603794 | orchestrator | + ceph versions 2025-06-11 15:18:29.205767 | orchestrator | { 2025-06-11 15:18:29.205870 | orchestrator | "mon": { 2025-06-11 15:18:29.205886 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-11 15:18:29.205900 | orchestrator | }, 2025-06-11 15:18:29.205911 | orchestrator | "mgr": { 2025-06-11 15:18:29.205923 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-11 15:18:29.205934 | orchestrator | }, 2025-06-11 15:18:29.205945 | orchestrator | "osd": { 2025-06-11 15:18:29.205957 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-11 15:18:29.205968 | orchestrator | }, 2025-06-11 15:18:29.205979 | orchestrator | "mds": { 2025-06-11 15:18:29.205990 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-11 15:18:29.206001 | orchestrator | }, 2025-06-11 15:18:29.206012 | orchestrator | "rgw": { 2025-06-11 15:18:29.206086 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-11 15:18:29.206097 | orchestrator | }, 2025-06-11 15:18:29.206108 | orchestrator | "overall": { 2025-06-11 15:18:29.206120 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-11 15:18:29.206131 | orchestrator | } 2025-06-11 15:18:29.206143 | orchestrator | } 2025-06-11 15:18:29.266957 | orchestrator | 2025-06-11 15:18:29.267050 | orchestrator | # Ceph OSD tree 2025-06-11 15:18:29.267064 | orchestrator | 2025-06-11 15:18:29.267076 | orchestrator | + echo 2025-06-11 15:18:29.267088 | orchestrator | + echo '# Ceph OSD tree' 2025-06-11 15:18:29.267099 | orchestrator | + echo 2025-06-11 15:18:29.267110 | orchestrator | + ceph osd df tree 2025-06-11 15:18:29.780677 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-11 15:18:29.780824 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 2025-06-11 15:18:29.780840 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-06-11 15:18:29.780851 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.88 
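The `ceph -s` and `ceph versions` output above confirms HEALTH_OK and a uniform 18.2.7 reef cluster across all 18 daemons. For scripting, the same data is available as JSON; a typical gate (illustrative, not necessarily what 100-ceph-with-ansible.sh does) looks like:

    # Require HEALTH_OK before moving on; --format json makes the status
    # machine-readable, and jq pulls out the single field of interest.
    status=$(ceph status --format json | jq -r '.health.status')
    [ "$status" = "HEALTH_OK" ] || { echo "ceph health: $status" >&2; exit 1; }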
1.16 201 up osd.0 2025-06-11 15:18:29.780862 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1008 MiB 939 MiB 1 KiB 70 MiB 19 GiB 4.93 0.83 189 up osd.5 2025-06-11 15:18:29.780873 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-11 15:18:29.780883 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.24 1.05 184 up osd.1 2025-06-11 15:18:29.780894 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.60 0.95 204 up osd.3 2025-06-11 15:18:29.780905 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-11 15:18:29.780916 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.26 1.23 203 up osd.2 2025-06-11 15:18:29.780927 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 936 MiB 867 MiB 1 KiB 70 MiB 19 GiB 4.58 0.77 189 up osd.4 2025-06-11 15:18:29.780937 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91 2025-06-11 15:18:29.780949 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 0.98 2025-06-11 15:18:29.826460 | orchestrator | 2025-06-11 15:18:29.826581 | orchestrator | # Ceph monitor status 2025-06-11 15:18:29.826596 | orchestrator | 2025-06-11 15:18:29.826608 | orchestrator | + echo 2025-06-11 15:18:29.826619 | orchestrator | + echo '# Ceph monitor status' 2025-06-11 15:18:29.826631 | orchestrator | + echo 2025-06-11 15:18:29.826642 | orchestrator | + ceph mon stat 2025-06-11 15:18:30.399590 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-11 15:18:30.455576 | orchestrator | 2025-06-11 15:18:30.455646 | orchestrator | # Ceph quorum status 2025-06-11 15:18:30.455667 | orchestrator | 2025-06-11 15:18:30.455686 | orchestrator | + echo 2025-06-11 15:18:30.455706 | orchestrator | + echo '# Ceph quorum status' 2025-06-11 15:18:30.455726 | orchestrator | + echo 2025-06-11 15:18:30.455803 | orchestrator | + ceph quorum_status 2025-06-11 15:18:30.455815 | orchestrator | + jq 2025-06-11 15:18:31.077242 | orchestrator | { 2025-06-11 15:18:31.077317 | orchestrator | "election_epoch": 8, 2025-06-11 15:18:31.077324 | orchestrator | "quorum": [ 2025-06-11 15:18:31.077329 | orchestrator | 0, 2025-06-11 15:18:31.077333 | orchestrator | 1, 2025-06-11 15:18:31.077336 | orchestrator | 2 2025-06-11 15:18:31.077340 | orchestrator | ], 2025-06-11 15:18:31.077345 | orchestrator | "quorum_names": [ 2025-06-11 15:18:31.077349 | orchestrator | "testbed-node-0", 2025-06-11 15:18:31.077353 | orchestrator | "testbed-node-1", 2025-06-11 15:18:31.077357 | orchestrator | "testbed-node-2" 2025-06-11 15:18:31.077361 | orchestrator | ], 2025-06-11 15:18:31.077365 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-11 15:18:31.077370 | orchestrator | "quorum_age": 1729, 2025-06-11 15:18:31.077374 | orchestrator | "features": { 2025-06-11 15:18:31.077378 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-11 15:18:31.077381 | orchestrator | "quorum_mon": [ 2025-06-11 15:18:31.077385 | orchestrator | "kraken", 2025-06-11 15:18:31.077389 | orchestrator | "luminous", 2025-06-11 15:18:31.077393 | orchestrator | "mimic", 2025-06-11 15:18:31.077397 | 
orchestrator | "osdmap-prune", 2025-06-11 15:18:31.077400 | orchestrator | "nautilus", 2025-06-11 15:18:31.077404 | orchestrator | "octopus", 2025-06-11 15:18:31.077408 | orchestrator | "pacific", 2025-06-11 15:18:31.077428 | orchestrator | "elector-pinging", 2025-06-11 15:18:31.077433 | orchestrator | "quincy", 2025-06-11 15:18:31.077436 | orchestrator | "reef" 2025-06-11 15:18:31.077440 | orchestrator | ] 2025-06-11 15:18:31.077444 | orchestrator | }, 2025-06-11 15:18:31.077447 | orchestrator | "monmap": { 2025-06-11 15:18:31.077451 | orchestrator | "epoch": 1, 2025-06-11 15:18:31.077455 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-11 15:18:31.077459 | orchestrator | "modified": "2025-06-11T14:49:24.984519Z", 2025-06-11 15:18:31.077463 | orchestrator | "created": "2025-06-11T14:49:24.984519Z", 2025-06-11 15:18:31.077467 | orchestrator | "min_mon_release": 18, 2025-06-11 15:18:31.077471 | orchestrator | "min_mon_release_name": "reef", 2025-06-11 15:18:31.077486 | orchestrator | "election_strategy": 1, 2025-06-11 15:18:31.077490 | orchestrator | "disallowed_leaders: ": "", 2025-06-11 15:18:31.077494 | orchestrator | "stretch_mode": false, 2025-06-11 15:18:31.077498 | orchestrator | "tiebreaker_mon": "", 2025-06-11 15:18:31.077502 | orchestrator | "removed_ranks: ": "", 2025-06-11 15:18:31.077505 | orchestrator | "features": { 2025-06-11 15:18:31.077509 | orchestrator | "persistent": [ 2025-06-11 15:18:31.077513 | orchestrator | "kraken", 2025-06-11 15:18:31.077516 | orchestrator | "luminous", 2025-06-11 15:18:31.077520 | orchestrator | "mimic", 2025-06-11 15:18:31.077524 | orchestrator | "osdmap-prune", 2025-06-11 15:18:31.077527 | orchestrator | "nautilus", 2025-06-11 15:18:31.077531 | orchestrator | "octopus", 2025-06-11 15:18:31.077535 | orchestrator | "pacific", 2025-06-11 15:18:31.077538 | orchestrator | "elector-pinging", 2025-06-11 15:18:31.077542 | orchestrator | "quincy", 2025-06-11 15:18:31.077599 | orchestrator | "reef" 2025-06-11 15:18:31.077603 | orchestrator | ], 2025-06-11 15:18:31.077607 | orchestrator | "optional": [] 2025-06-11 15:18:31.077611 | orchestrator | }, 2025-06-11 15:18:31.077614 | orchestrator | "mons": [ 2025-06-11 15:18:31.077618 | orchestrator | { 2025-06-11 15:18:31.077622 | orchestrator | "rank": 0, 2025-06-11 15:18:31.077626 | orchestrator | "name": "testbed-node-0", 2025-06-11 15:18:31.077629 | orchestrator | "public_addrs": { 2025-06-11 15:18:31.077633 | orchestrator | "addrvec": [ 2025-06-11 15:18:31.077637 | orchestrator | { 2025-06-11 15:18:31.077640 | orchestrator | "type": "v2", 2025-06-11 15:18:31.077644 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-11 15:18:31.077648 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077652 | orchestrator | }, 2025-06-11 15:18:31.077655 | orchestrator | { 2025-06-11 15:18:31.077659 | orchestrator | "type": "v1", 2025-06-11 15:18:31.077663 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-11 15:18:31.077667 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077670 | orchestrator | } 2025-06-11 15:18:31.077674 | orchestrator | ] 2025-06-11 15:18:31.077678 | orchestrator | }, 2025-06-11 15:18:31.077681 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-11 15:18:31.077685 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-11 15:18:31.077689 | orchestrator | "priority": 0, 2025-06-11 15:18:31.077693 | orchestrator | "weight": 0, 2025-06-11 15:18:31.077696 | orchestrator | "crush_location": "{}" 2025-06-11 15:18:31.077700 | orchestrator | }, 2025-06-11 
15:18:31.077703 | orchestrator | { 2025-06-11 15:18:31.077787 | orchestrator | "rank": 1, 2025-06-11 15:18:31.077792 | orchestrator | "name": "testbed-node-1", 2025-06-11 15:18:31.077795 | orchestrator | "public_addrs": { 2025-06-11 15:18:31.077799 | orchestrator | "addrvec": [ 2025-06-11 15:18:31.077803 | orchestrator | { 2025-06-11 15:18:31.077806 | orchestrator | "type": "v2", 2025-06-11 15:18:31.077810 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-11 15:18:31.077814 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077817 | orchestrator | }, 2025-06-11 15:18:31.077821 | orchestrator | { 2025-06-11 15:18:31.077825 | orchestrator | "type": "v1", 2025-06-11 15:18:31.077829 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-11 15:18:31.077832 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077836 | orchestrator | } 2025-06-11 15:18:31.077840 | orchestrator | ] 2025-06-11 15:18:31.077844 | orchestrator | }, 2025-06-11 15:18:31.077847 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-11 15:18:31.077851 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-11 15:18:31.077855 | orchestrator | "priority": 0, 2025-06-11 15:18:31.077859 | orchestrator | "weight": 0, 2025-06-11 15:18:31.077862 | orchestrator | "crush_location": "{}" 2025-06-11 15:18:31.077872 | orchestrator | }, 2025-06-11 15:18:31.077875 | orchestrator | { 2025-06-11 15:18:31.077879 | orchestrator | "rank": 2, 2025-06-11 15:18:31.077883 | orchestrator | "name": "testbed-node-2", 2025-06-11 15:18:31.077887 | orchestrator | "public_addrs": { 2025-06-11 15:18:31.077890 | orchestrator | "addrvec": [ 2025-06-11 15:18:31.077894 | orchestrator | { 2025-06-11 15:18:31.077898 | orchestrator | "type": "v2", 2025-06-11 15:18:31.077901 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-11 15:18:31.077905 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077909 | orchestrator | }, 2025-06-11 15:18:31.077912 | orchestrator | { 2025-06-11 15:18:31.077916 | orchestrator | "type": "v1", 2025-06-11 15:18:31.077920 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-11 15:18:31.077923 | orchestrator | "nonce": 0 2025-06-11 15:18:31.077927 | orchestrator | } 2025-06-11 15:18:31.077931 | orchestrator | ] 2025-06-11 15:18:31.077934 | orchestrator | }, 2025-06-11 15:18:31.077938 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-11 15:18:31.077942 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-11 15:18:31.077946 | orchestrator | "priority": 0, 2025-06-11 15:18:31.077949 | orchestrator | "weight": 0, 2025-06-11 15:18:31.077953 | orchestrator | "crush_location": "{}" 2025-06-11 15:18:31.077957 | orchestrator | } 2025-06-11 15:18:31.077960 | orchestrator | ] 2025-06-11 15:18:31.077964 | orchestrator | } 2025-06-11 15:18:31.077968 | orchestrator | } 2025-06-11 15:18:31.077979 | orchestrator | 2025-06-11 15:18:31.077984 | orchestrator | # Ceph free space status 2025-06-11 15:18:31.077987 | orchestrator | 2025-06-11 15:18:31.077991 | orchestrator | + echo 2025-06-11 15:18:31.077995 | orchestrator | + echo '# Ceph free space status' 2025-06-11 15:18:31.077999 | orchestrator | + echo 2025-06-11 15:18:31.078002 | orchestrator | + ceph df 2025-06-11 15:18:31.659719 | orchestrator | --- RAW STORAGE --- 2025-06-11 15:18:31.659794 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-11 15:18:31.659810 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-11 15:18:31.659816 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-11 15:18:31.659820 | orchestrator | 2025-06-11 
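Since the job already pipes `ceph quorum_status` through `jq`, the same JSON can drive an assertion instead of an eyeball check. A sketch using only fields visible in the output above:

    # Expect every monitor in the monmap to also be in quorum.
    ceph quorum_status | jq -e '(.quorum | length) == (.monmap.mons | length)' >/dev/null \
        || { echo "monitor(s) missing from quorum" >&2; exit 1; }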
# Ceph free space status

+ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.91
TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.91

--- POOLS ---
POOL                       ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
.rgw.root                   9   32  3.5 KiB        7   56 KiB      0     53 GiB
backups                    10   32     19 B        2   12 KiB      0     35 GiB
volumes                    11   32     19 B        2   12 KiB      0     35 GiB
images                     12   32  2.2 GiB      299  6.7 GiB   5.95     35 GiB
metrics                    13   32     19 B        2   12 KiB      0     35 GiB
vms                        14   32     19 B        2   12 KiB      0     35 GiB

++ semver latest 5.0.0
+ [[ -1 -eq -1 ]]
+ [[ latest != \l\a\t\e\s\t ]]
+ [[ ! -e /etc/redhat-release ]]
+ osism apply facts
Registering Redlock._acquired_script
Registering Redlock._extend_script
Registering Redlock._release_script
2025-06-11 15:18:33 | INFO  | Task 125205ce-045c-43aa-a1d9-68ccea08fbfa (facts) was prepared for execution.
2025-06-11 15:18:33 | INFO  | It takes a moment until task 125205ce-045c-43aa-a1d9-68ccea08fbfa (facts) has been started and output is visible here.
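The MAX AVAIL column in the POOLS table above follows from the replica count: most pools here are replicated three times, so the 113 GiB of raw free space works out to roughly 113 / 3 ≈ 37.7 GiB per pool, which Ceph trims to about 35 GiB once full-ratio headroom and the projection onto the most-filled OSD are taken into account; the two 53 GiB entries are consistent with pools carrying fewer replicas. Assuming the stock JSON schema, the same figures can be pulled programmatically:

    # Per-pool MAX AVAIL in bytes, straight from the ceph df JSON.
    ceph df -f json | jq -r '.pools[] | "\(.name)\t\(.stats.max_avail)"'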
2025-06-11 15:18:46.404115 | orchestrator | 2025-06-11 15:18:46.404244 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-11 15:18:46.404261 | orchestrator | 2025-06-11 15:18:46.404274 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-11 15:18:46.404326 | orchestrator | Wednesday 11 June 2025 15:18:37 +0000 (0:00:00.212) 0:00:00.212 ******** 2025-06-11 15:18:46.404339 | orchestrator | ok: [testbed-manager] 2025-06-11 15:18:46.404352 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:18:46.404363 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:18:46.404374 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:18:46.404385 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:18:46.404395 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:18:46.404406 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:18:46.404417 | orchestrator | 2025-06-11 15:18:46.404428 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-11 15:18:46.404439 | orchestrator | Wednesday 11 June 2025 15:18:38 +0000 (0:00:01.355) 0:00:01.567 ******** 2025-06-11 15:18:46.404450 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:18:46.404462 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:18:46.404473 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:18:46.404483 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:18:46.404494 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:18:46.404505 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:18:46.404516 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:18:46.404526 | orchestrator | 2025-06-11 15:18:46.404537 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-11 15:18:46.404548 | orchestrator | 2025-06-11 15:18:46.404559 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-11 15:18:46.404595 | orchestrator | Wednesday 11 June 2025 15:18:39 +0000 (0:00:01.149) 0:00:02.716 ******** 2025-06-11 15:18:46.404608 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:18:46.404618 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:18:46.404629 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:18:46.404639 | orchestrator | ok: [testbed-manager] 2025-06-11 15:18:46.404649 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:18:46.404662 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:18:46.404674 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:18:46.404686 | orchestrator | 2025-06-11 15:18:46.404698 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-11 15:18:46.404710 | orchestrator | 2025-06-11 15:18:46.404723 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-11 15:18:46.404735 | orchestrator | Wednesday 11 June 2025 15:18:45 +0000 (0:00:05.239) 0:00:07.955 ******** 2025-06-11 15:18:46.404747 | orchestrator | skipping: [testbed-manager] 2025-06-11 15:18:46.404759 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:18:46.404771 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:18:46.404782 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:18:46.404794 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:18:46.404806 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:18:46.404817 | orchestrator | skipping: 
[testbed-node-5] 2025-06-11 15:18:46.404829 | orchestrator | 2025-06-11 15:18:46.404841 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:18:46.404854 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404867 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404902 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404915 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404927 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404939 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404951 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:18:46.404963 | orchestrator | 2025-06-11 15:18:46.404975 | orchestrator | 2025-06-11 15:18:46.404987 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:18:46.405049 | orchestrator | Wednesday 11 June 2025 15:18:45 +0000 (0:00:00.688) 0:00:08.644 ******** 2025-06-11 15:18:46.405061 | orchestrator | =============================================================================== 2025-06-11 15:18:46.405071 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.24s 2025-06-11 15:18:46.405082 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2025-06-11 15:18:46.405093 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2025-06-11 15:18:46.405104 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.69s 2025-06-11 15:18:46.641872 | orchestrator | + osism validate ceph-mons 2025-06-11 15:18:48.415328 | orchestrator | Registering Redlock._acquired_script 2025-06-11 15:18:48.415429 | orchestrator | Registering Redlock._extend_script 2025-06-11 15:18:48.415444 | orchestrator | Registering Redlock._release_script 2025-06-11 15:19:07.544092 | orchestrator | 2025-06-11 15:19:07.544208 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-11 15:19:07.544225 | orchestrator | 2025-06-11 15:19:07.544238 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-11 15:19:07.544268 | orchestrator | Wednesday 11 June 2025 15:18:52 +0000 (0:00:00.436) 0:00:00.436 ******** 2025-06-11 15:19:07.544280 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.544291 | orchestrator | 2025-06-11 15:19:07.544303 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-11 15:19:07.544314 | orchestrator | Wednesday 11 June 2025 15:18:53 +0000 (0:00:00.518) 0:00:00.954 ******** 2025-06-11 15:19:07.544325 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.544336 | orchestrator | 2025-06-11 15:19:07.544347 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-11 15:19:07.544358 | orchestrator | Wednesday 11 June 2025 15:18:53 
+0000 (0:00:00.683) 0:00:01.638 ******** 2025-06-11 15:19:07.544369 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544382 | orchestrator | 2025-06-11 15:19:07.544394 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-11 15:19:07.544405 | orchestrator | Wednesday 11 June 2025 15:18:54 +0000 (0:00:00.175) 0:00:01.813 ******** 2025-06-11 15:19:07.544415 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544427 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:07.544437 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:07.544448 | orchestrator | 2025-06-11 15:19:07.544459 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-11 15:19:07.544470 | orchestrator | Wednesday 11 June 2025 15:18:54 +0000 (0:00:00.264) 0:00:02.078 ******** 2025-06-11 15:19:07.544481 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:07.544492 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:07.544503 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544514 | orchestrator | 2025-06-11 15:19:07.544525 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-11 15:19:07.544557 | orchestrator | Wednesday 11 June 2025 15:18:55 +0000 (0:00:00.923) 0:00:03.001 ******** 2025-06-11 15:19:07.544569 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.544580 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:19:07.544591 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:19:07.544602 | orchestrator | 2025-06-11 15:19:07.544657 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-11 15:19:07.544671 | orchestrator | Wednesday 11 June 2025 15:18:55 +0000 (0:00:00.259) 0:00:03.261 ******** 2025-06-11 15:19:07.544684 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544696 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:07.544708 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:07.544720 | orchestrator | 2025-06-11 15:19:07.544732 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:07.544745 | orchestrator | Wednesday 11 June 2025 15:18:55 +0000 (0:00:00.409) 0:00:03.670 ******** 2025-06-11 15:19:07.544758 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544770 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:07.544782 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:07.544794 | orchestrator | 2025-06-11 15:19:07.544806 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-11 15:19:07.544819 | orchestrator | Wednesday 11 June 2025 15:18:56 +0000 (0:00:00.283) 0:00:03.953 ******** 2025-06-11 15:19:07.544831 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.544843 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:19:07.544855 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:19:07.544868 | orchestrator | 2025-06-11 15:19:07.544880 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-11 15:19:07.544892 | orchestrator | Wednesday 11 June 2025 15:18:56 +0000 (0:00:00.274) 0:00:04.228 ******** 2025-06-11 15:19:07.544905 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.544917 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:07.544929 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:07.544941 | orchestrator | 
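Each check above only records a per-host pass/fail fact; the "Aggregate test results" steps that follow fold those facts into a single verdict, which a handler finally writes out as a JSON report on the manager (the path is printed at the end of the play). A hedged way to review such a report after the run:

    # On testbed-manager: pretty-print the ceph-mons validator report(s).
    jq . /opt/reports/validator/ceph-mons-validator-*-report.json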
2025-06-11 15:19:07.544953 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:19:07.544966 | orchestrator | Wednesday 11 June 2025 15:18:56 +0000 (0:00:00.307) 0:00:04.536 ******** 2025-06-11 15:19:07.544984 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.544995 | orchestrator | 2025-06-11 15:19:07.545006 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:19:07.545017 | orchestrator | Wednesday 11 June 2025 15:18:57 +0000 (0:00:00.665) 0:00:05.201 ******** 2025-06-11 15:19:07.545028 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545038 | orchestrator | 2025-06-11 15:19:07.545049 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-11 15:19:07.545059 | orchestrator | Wednesday 11 June 2025 15:18:57 +0000 (0:00:00.253) 0:00:05.454 ******** 2025-06-11 15:19:07.545070 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545081 | orchestrator | 2025-06-11 15:19:07.545091 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:07.545102 | orchestrator | Wednesday 11 June 2025 15:18:57 +0000 (0:00:00.252) 0:00:05.706 ******** 2025-06-11 15:19:07.545112 | orchestrator | 2025-06-11 15:19:07.545123 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:07.545134 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.086) 0:00:05.793 ******** 2025-06-11 15:19:07.545144 | orchestrator | 2025-06-11 15:19:07.545154 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:07.545165 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.070) 0:00:05.863 ******** 2025-06-11 15:19:07.545176 | orchestrator | 2025-06-11 15:19:07.545186 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:19:07.545197 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.075) 0:00:05.939 ******** 2025-06-11 15:19:07.545207 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545226 | orchestrator | 2025-06-11 15:19:07.545237 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-11 15:19:07.545248 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.255) 0:00:06.194 ******** 2025-06-11 15:19:07.545259 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545270 | orchestrator | 2025-06-11 15:19:07.545298 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-11 15:19:07.545310 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.243) 0:00:06.437 ******** 2025-06-11 15:19:07.545320 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545331 | orchestrator | 2025-06-11 15:19:07.545342 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-11 15:19:07.545352 | orchestrator | Wednesday 11 June 2025 15:18:58 +0000 (0:00:00.116) 0:00:06.553 ******** 2025-06-11 15:19:07.545363 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:19:07.545374 | orchestrator | 2025-06-11 15:19:07.545384 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-11 15:19:07.545395 | orchestrator | Wednesday 11 June 2025 15:19:00 
+0000 (0:00:01.620) 0:00:08.174 ******** 2025-06-11 15:19:07.545405 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545416 | orchestrator | 2025-06-11 15:19:07.545427 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-11 15:19:07.545437 | orchestrator | Wednesday 11 June 2025 15:19:00 +0000 (0:00:00.328) 0:00:08.503 ******** 2025-06-11 15:19:07.545448 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545458 | orchestrator | 2025-06-11 15:19:07.545469 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-11 15:19:07.545480 | orchestrator | Wednesday 11 June 2025 15:19:01 +0000 (0:00:00.344) 0:00:08.848 ******** 2025-06-11 15:19:07.545490 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545501 | orchestrator | 2025-06-11 15:19:07.545511 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-11 15:19:07.545522 | orchestrator | Wednesday 11 June 2025 15:19:01 +0000 (0:00:00.328) 0:00:09.176 ******** 2025-06-11 15:19:07.545533 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545543 | orchestrator | 2025-06-11 15:19:07.545554 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-11 15:19:07.545564 | orchestrator | Wednesday 11 June 2025 15:19:01 +0000 (0:00:00.341) 0:00:09.518 ******** 2025-06-11 15:19:07.545575 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545585 | orchestrator | 2025-06-11 15:19:07.545596 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-11 15:19:07.545628 | orchestrator | Wednesday 11 June 2025 15:19:01 +0000 (0:00:00.127) 0:00:09.645 ******** 2025-06-11 15:19:07.545640 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545651 | orchestrator | 2025-06-11 15:19:07.545661 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-11 15:19:07.545672 | orchestrator | Wednesday 11 June 2025 15:19:02 +0000 (0:00:00.159) 0:00:09.805 ******** 2025-06-11 15:19:07.545682 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545693 | orchestrator | 2025-06-11 15:19:07.545704 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-11 15:19:07.545714 | orchestrator | Wednesday 11 June 2025 15:19:02 +0000 (0:00:00.136) 0:00:09.941 ******** 2025-06-11 15:19:07.545725 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:19:07.545735 | orchestrator | 2025-06-11 15:19:07.545746 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-11 15:19:07.545756 | orchestrator | Wednesday 11 June 2025 15:19:03 +0000 (0:00:01.340) 0:00:11.281 ******** 2025-06-11 15:19:07.545767 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545777 | orchestrator | 2025-06-11 15:19:07.545788 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-11 15:19:07.545798 | orchestrator | Wednesday 11 June 2025 15:19:03 +0000 (0:00:00.291) 0:00:11.573 ******** 2025-06-11 15:19:07.545809 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545826 | orchestrator | 2025-06-11 15:19:07.545837 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-11 15:19:07.545847 | orchestrator | Wednesday 11 June 2025 15:19:03 +0000 (0:00:00.166) 
0:00:11.739 ******** 2025-06-11 15:19:07.545858 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:07.545869 | orchestrator | 2025-06-11 15:19:07.545879 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-11 15:19:07.545890 | orchestrator | Wednesday 11 June 2025 15:19:04 +0000 (0:00:00.143) 0:00:11.883 ******** 2025-06-11 15:19:07.545900 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545911 | orchestrator | 2025-06-11 15:19:07.545922 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-11 15:19:07.545932 | orchestrator | Wednesday 11 June 2025 15:19:04 +0000 (0:00:00.139) 0:00:12.022 ******** 2025-06-11 15:19:07.545942 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.545953 | orchestrator | 2025-06-11 15:19:07.545964 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-11 15:19:07.545974 | orchestrator | Wednesday 11 June 2025 15:19:04 +0000 (0:00:00.349) 0:00:12.372 ******** 2025-06-11 15:19:07.545985 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.545996 | orchestrator | 2025-06-11 15:19:07.546006 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-11 15:19:07.546069 | orchestrator | Wednesday 11 June 2025 15:19:04 +0000 (0:00:00.243) 0:00:12.615 ******** 2025-06-11 15:19:07.546081 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:07.546092 | orchestrator | 2025-06-11 15:19:07.546102 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:19:07.546113 | orchestrator | Wednesday 11 June 2025 15:19:05 +0000 (0:00:00.257) 0:00:12.873 ******** 2025-06-11 15:19:07.546124 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.546134 | orchestrator | 2025-06-11 15:19:07.546145 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:19:07.546156 | orchestrator | Wednesday 11 June 2025 15:19:06 +0000 (0:00:01.601) 0:00:14.475 ******** 2025-06-11 15:19:07.546166 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.546177 | orchestrator | 2025-06-11 15:19:07.546188 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-11 15:19:07.546198 | orchestrator | Wednesday 11 June 2025 15:19:06 +0000 (0:00:00.266) 0:00:14.741 ******** 2025-06-11 15:19:07.546209 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:07.546220 | orchestrator | 2025-06-11 15:19:07.546238 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:09.755281 | orchestrator | Wednesday 11 June 2025 15:19:07 +0000 (0:00:00.289) 0:00:15.030 ******** 2025-06-11 15:19:09.755393 | orchestrator | 2025-06-11 15:19:09.755408 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:09.755421 | orchestrator | Wednesday 11 June 2025 15:19:07 +0000 (0:00:00.073) 0:00:15.104 ******** 2025-06-11 15:19:09.755431 | orchestrator | 2025-06-11 15:19:09.755443 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:09.755454 | orchestrator | Wednesday 11 June 2025 15:19:07 +0000 (0:00:00.111) 0:00:15.215 
******** 2025-06-11 15:19:09.755465 | orchestrator | 2025-06-11 15:19:09.755475 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-11 15:19:09.755486 | orchestrator | Wednesday 11 June 2025 15:19:07 +0000 (0:00:00.087) 0:00:15.303 ******** 2025-06-11 15:19:09.755497 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:09.755508 | orchestrator | 2025-06-11 15:19:09.755519 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:19:09.755530 | orchestrator | Wednesday 11 June 2025 15:19:08 +0000 (0:00:01.351) 0:00:16.655 ******** 2025-06-11 15:19:09.755540 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-11 15:19:09.755581 | orchestrator |  "msg": [ 2025-06-11 15:19:09.755593 | orchestrator |  "Validator run completed.", 2025-06-11 15:19:09.755605 | orchestrator |  "You can find the report file here:", 2025-06-11 15:19:09.755687 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-11T15:18:53+00:00-report.json", 2025-06-11 15:19:09.755701 | orchestrator |  "on the following host:", 2025-06-11 15:19:09.755716 | orchestrator |  "testbed-manager" 2025-06-11 15:19:09.755727 | orchestrator |  ] 2025-06-11 15:19:09.755739 | orchestrator | } 2025-06-11 15:19:09.755750 | orchestrator | 2025-06-11 15:19:09.755761 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:19:09.755774 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-11 15:19:09.755786 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:19:09.755798 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:19:09.755809 | orchestrator | 2025-06-11 15:19:09.755820 | orchestrator | 2025-06-11 15:19:09.755831 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:19:09.755842 | orchestrator | Wednesday 11 June 2025 15:19:09 +0000 (0:00:00.564) 0:00:17.219 ******** 2025-06-11 15:19:09.755852 | orchestrator | =============================================================================== 2025-06-11 15:19:09.755863 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.62s 2025-06-11 15:19:09.755874 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s 2025-06-11 15:19:09.755885 | orchestrator | Write report file ------------------------------------------------------- 1.35s 2025-06-11 15:19:09.755896 | orchestrator | Gather status data ------------------------------------------------------ 1.34s 2025-06-11 15:19:09.755907 | orchestrator | Get container info ------------------------------------------------------ 0.92s 2025-06-11 15:19:09.755917 | orchestrator | Create report output directory ------------------------------------------ 0.68s 2025-06-11 15:19:09.755928 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2025-06-11 15:19:09.755939 | orchestrator | Print report file information ------------------------------------------- 0.56s 2025-06-11 15:19:09.755955 | orchestrator | Get timestamp for report file ------------------------------------------- 0.52s 2025-06-11 15:19:09.755966 | orchestrator | Set test result to 
passed if container is existing ---------------------- 0.41s 2025-06-11 15:19:09.755977 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s 2025-06-11 15:19:09.755988 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.34s 2025-06-11 15:19:09.755999 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2025-06-11 15:19:09.756009 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s 2025-06-11 15:19:09.756020 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-06-11 15:19:09.756031 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2025-06-11 15:19:09.756042 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2025-06-11 15:19:09.756052 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2025-06-11 15:19:09.756063 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-06-11 15:19:09.756074 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.27s 2025-06-11 15:19:09.978167 | orchestrator | + osism validate ceph-mgrs 2025-06-11 15:19:11.735099 | orchestrator | Registering Redlock._acquired_script 2025-06-11 15:19:11.735258 | orchestrator | Registering Redlock._extend_script 2025-06-11 15:19:11.735275 | orchestrator | Registering Redlock._release_script 2025-06-11 15:19:31.298201 | orchestrator | 2025-06-11 15:19:31.298300 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-11 15:19:31.298314 | orchestrator | 2025-06-11 15:19:31.298324 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-11 15:19:31.298334 | orchestrator | Wednesday 11 June 2025 15:19:16 +0000 (0:00:00.454) 0:00:00.454 ******** 2025-06-11 15:19:31.298343 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.298352 | orchestrator | 2025-06-11 15:19:31.298361 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-11 15:19:31.298370 | orchestrator | Wednesday 11 June 2025 15:19:16 +0000 (0:00:00.681) 0:00:01.136 ******** 2025-06-11 15:19:31.298378 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.298387 | orchestrator | 2025-06-11 15:19:31.298396 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-11 15:19:31.298404 | orchestrator | Wednesday 11 June 2025 15:19:17 +0000 (0:00:00.916) 0:00:02.052 ******** 2025-06-11 15:19:31.298413 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298422 | orchestrator | 2025-06-11 15:19:31.298431 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-11 15:19:31.298439 | orchestrator | Wednesday 11 June 2025 15:19:18 +0000 (0:00:00.255) 0:00:02.308 ******** 2025-06-11 15:19:31.298448 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298456 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:31.298465 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:31.298473 | orchestrator | 2025-06-11 15:19:31.298481 | orchestrator | TASK [Get container info] ****************************************************** 
2025-06-11 15:19:31.298490 | orchestrator | Wednesday 11 June 2025 15:19:18 +0000 (0:00:00.291) 0:00:02.600 ******** 2025-06-11 15:19:31.298498 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:31.298507 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298515 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:31.298523 | orchestrator | 2025-06-11 15:19:31.298532 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-11 15:19:31.298540 | orchestrator | Wednesday 11 June 2025 15:19:19 +0000 (0:00:01.010) 0:00:03.610 ******** 2025-06-11 15:19:31.298549 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.298558 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:19:31.298566 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:19:31.298574 | orchestrator | 2025-06-11 15:19:31.298583 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-11 15:19:31.298592 | orchestrator | Wednesday 11 June 2025 15:19:19 +0000 (0:00:00.284) 0:00:03.894 ******** 2025-06-11 15:19:31.298600 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298608 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:31.298617 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:31.298625 | orchestrator | 2025-06-11 15:19:31.298634 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:31.298677 | orchestrator | Wednesday 11 June 2025 15:19:20 +0000 (0:00:00.497) 0:00:04.392 ******** 2025-06-11 15:19:31.298687 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298696 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:31.298705 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:31.298713 | orchestrator | 2025-06-11 15:19:31.298722 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-11 15:19:31.298730 | orchestrator | Wednesday 11 June 2025 15:19:20 +0000 (0:00:00.325) 0:00:04.717 ******** 2025-06-11 15:19:31.298739 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.298747 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:19:31.298756 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:19:31.298764 | orchestrator | 2025-06-11 15:19:31.298773 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-11 15:19:31.298782 | orchestrator | Wednesday 11 June 2025 15:19:20 +0000 (0:00:00.290) 0:00:05.008 ******** 2025-06-11 15:19:31.298790 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.298798 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:19:31.298830 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:19:31.298839 | orchestrator | 2025-06-11 15:19:31.298847 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:19:31.298856 | orchestrator | Wednesday 11 June 2025 15:19:21 +0000 (0:00:00.314) 0:00:05.322 ******** 2025-06-11 15:19:31.298864 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.298873 | orchestrator | 2025-06-11 15:19:31.298881 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:19:31.298890 | orchestrator | Wednesday 11 June 2025 15:19:21 +0000 (0:00:00.739) 0:00:06.062 ******** 2025-06-11 15:19:31.298898 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.298907 | orchestrator | 2025-06-11 15:19:31.298928 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2025-06-11 15:19:31.298937 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.260) 0:00:06.322 ******** 2025-06-11 15:19:31.298945 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.298990 | orchestrator | 2025-06-11 15:19:31.299001 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299010 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.273) 0:00:06.596 ******** 2025-06-11 15:19:31.299018 | orchestrator | 2025-06-11 15:19:31.299027 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299035 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.071) 0:00:06.667 ******** 2025-06-11 15:19:31.299043 | orchestrator | 2025-06-11 15:19:31.299052 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299061 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.070) 0:00:06.737 ******** 2025-06-11 15:19:31.299069 | orchestrator | 2025-06-11 15:19:31.299077 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:19:31.299086 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.073) 0:00:06.810 ******** 2025-06-11 15:19:31.299094 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.299103 | orchestrator | 2025-06-11 15:19:31.299111 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-11 15:19:31.299120 | orchestrator | Wednesday 11 June 2025 15:19:22 +0000 (0:00:00.262) 0:00:07.073 ******** 2025-06-11 15:19:31.299128 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.299137 | orchestrator | 2025-06-11 15:19:31.299163 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-11 15:19:31.299172 | orchestrator | Wednesday 11 June 2025 15:19:23 +0000 (0:00:00.255) 0:00:07.329 ******** 2025-06-11 15:19:31.299181 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.299189 | orchestrator | 2025-06-11 15:19:31.299198 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-11 15:19:31.299206 | orchestrator | Wednesday 11 June 2025 15:19:23 +0000 (0:00:00.132) 0:00:07.461 ******** 2025-06-11 15:19:31.299215 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:19:31.299223 | orchestrator | 2025-06-11 15:19:31.299231 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-11 15:19:31.299240 | orchestrator | Wednesday 11 June 2025 15:19:25 +0000 (0:00:02.004) 0:00:09.465 ******** 2025-06-11 15:19:31.299248 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.299257 | orchestrator | 2025-06-11 15:19:31.299265 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-11 15:19:31.299274 | orchestrator | Wednesday 11 June 2025 15:19:25 +0000 (0:00:00.250) 0:00:09.716 ******** 2025-06-11 15:19:31.299282 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.299290 | orchestrator | 2025-06-11 15:19:31.299299 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-11 15:19:31.299307 | orchestrator | Wednesday 11 June 2025 15:19:26 +0000 (0:00:00.488) 0:00:10.204 ******** 2025-06-11 
15:19:31.299316 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.299324 | orchestrator | 2025-06-11 15:19:31.299333 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-11 15:19:31.299348 | orchestrator | Wednesday 11 June 2025 15:19:26 +0000 (0:00:00.148) 0:00:10.353 ******** 2025-06-11 15:19:31.299357 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:19:31.299365 | orchestrator | 2025-06-11 15:19:31.299374 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-11 15:19:31.299382 | orchestrator | Wednesday 11 June 2025 15:19:26 +0000 (0:00:00.153) 0:00:10.506 ******** 2025-06-11 15:19:31.299390 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.299399 | orchestrator | 2025-06-11 15:19:31.299408 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-11 15:19:31.299416 | orchestrator | Wednesday 11 June 2025 15:19:26 +0000 (0:00:00.286) 0:00:10.792 ******** 2025-06-11 15:19:31.299425 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:19:31.299433 | orchestrator | 2025-06-11 15:19:31.299441 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:19:31.299450 | orchestrator | Wednesday 11 June 2025 15:19:26 +0000 (0:00:00.260) 0:00:11.052 ******** 2025-06-11 15:19:31.299458 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.299467 | orchestrator | 2025-06-11 15:19:31.299475 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:19:31.299484 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:01.298) 0:00:12.351 ******** 2025-06-11 15:19:31.299492 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.299501 | orchestrator | 2025-06-11 15:19:31.299509 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-11 15:19:31.299518 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:00.269) 0:00:12.620 ******** 2025-06-11 15:19:31.299526 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.299535 | orchestrator | 2025-06-11 15:19:31.299543 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299552 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:00.260) 0:00:12.880 ******** 2025-06-11 15:19:31.299560 | orchestrator | 2025-06-11 15:19:31.299569 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299577 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:00.068) 0:00:12.949 ******** 2025-06-11 15:19:31.299585 | orchestrator | 2025-06-11 15:19:31.299594 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:31.299602 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:00.067) 0:00:13.016 ******** 2025-06-11 15:19:31.299611 | orchestrator | 2025-06-11 15:19:31.299619 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-11 15:19:31.299628 | orchestrator | Wednesday 11 June 2025 15:19:28 +0000 (0:00:00.074) 0:00:13.090 ******** 2025-06-11 15:19:31.299636 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-06-11 15:19:31.299664 | orchestrator | 2025-06-11 15:19:31.299674 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:19:31.299682 | orchestrator | Wednesday 11 June 2025 15:19:30 +0000 (0:00:01.902) 0:00:14.993 ******** 2025-06-11 15:19:31.299691 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-11 15:19:31.299700 | orchestrator |  "msg": [ 2025-06-11 15:19:31.299709 | orchestrator |  "Validator run completed.", 2025-06-11 15:19:31.299718 | orchestrator |  "You can find the report file here:", 2025-06-11 15:19:31.299734 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-11T15:19:16+00:00-report.json", 2025-06-11 15:19:31.299744 | orchestrator |  "on the following host:", 2025-06-11 15:19:31.299752 | orchestrator |  "testbed-manager" 2025-06-11 15:19:31.299761 | orchestrator |  ] 2025-06-11 15:19:31.299770 | orchestrator | } 2025-06-11 15:19:31.299779 | orchestrator | 2025-06-11 15:19:31.299787 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:19:31.299797 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-11 15:19:31.299812 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:19:31.299828 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:19:31.615878 | orchestrator | 2025-06-11 15:19:31.615975 | orchestrator | 2025-06-11 15:19:31.616006 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:19:31.616031 | orchestrator | Wednesday 11 June 2025 15:19:31 +0000 (0:00:00.423) 0:00:15.417 ******** 2025-06-11 15:19:31.616043 | orchestrator | =============================================================================== 2025-06-11 15:19:31.616054 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s 2025-06-11 15:19:31.616065 | orchestrator | Write report file ------------------------------------------------------- 1.90s 2025-06-11 15:19:31.616076 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s 2025-06-11 15:19:31.616086 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-06-11 15:19:31.616097 | orchestrator | Create report output directory ------------------------------------------ 0.92s 2025-06-11 15:19:31.616108 | orchestrator | Aggregate test results step one ----------------------------------------- 0.74s 2025-06-11 15:19:31.616118 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-06-11 15:19:31.616129 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-06-11 15:19:31.616140 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.49s 2025-06-11 15:19:31.616151 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-06-11 15:19:31.616161 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-06-11 15:19:31.616172 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2025-06-11 15:19:31.616183 | orchestrator | Prepare test data 
for container existance test -------------------------- 0.29s 2025-06-11 15:19:31.616193 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-06-11 15:19:31.616204 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2025-06-11 15:19:31.616215 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-06-11 15:19:31.616226 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-06-11 15:19:31.616236 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2025-06-11 15:19:31.616247 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-06-11 15:19:31.616258 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-06-11 15:19:31.864320 | orchestrator | + osism validate ceph-osds 2025-06-11 15:19:33.542484 | orchestrator | Registering Redlock._acquired_script 2025-06-11 15:19:33.542605 | orchestrator | Registering Redlock._extend_script 2025-06-11 15:19:33.542623 | orchestrator | Registering Redlock._release_script 2025-06-11 15:19:41.363397 | orchestrator | 2025-06-11 15:19:41.363490 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-11 15:19:41.363505 | orchestrator | 2025-06-11 15:19:41.363517 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-11 15:19:41.363529 | orchestrator | Wednesday 11 June 2025 15:19:37 +0000 (0:00:00.321) 0:00:00.321 ******** 2025-06-11 15:19:41.363540 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:41.363551 | orchestrator | 2025-06-11 15:19:41.363562 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-11 15:19:41.363573 | orchestrator | Wednesday 11 June 2025 15:19:38 +0000 (0:00:00.604) 0:00:00.926 ******** 2025-06-11 15:19:41.363605 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:41.363617 | orchestrator | 2025-06-11 15:19:41.363627 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-11 15:19:41.363638 | orchestrator | Wednesday 11 June 2025 15:19:38 +0000 (0:00:00.306) 0:00:01.232 ******** 2025-06-11 15:19:41.363649 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:19:41.363704 | orchestrator | 2025-06-11 15:19:41.363715 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-11 15:19:41.363726 | orchestrator | Wednesday 11 June 2025 15:19:39 +0000 (0:00:00.788) 0:00:02.020 ******** 2025-06-11 15:19:41.363737 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:41.363749 | orchestrator | 2025-06-11 15:19:41.363759 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-11 15:19:41.363781 | orchestrator | Wednesday 11 June 2025 15:19:39 +0000 (0:00:00.119) 0:00:02.140 ******** 2025-06-11 15:19:41.363792 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:41.363804 | orchestrator | 2025-06-11 15:19:41.363815 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-11 15:19:41.363826 | orchestrator | Wednesday 11 June 2025 15:19:39 +0000 (0:00:00.129) 0:00:02.269 
******** 2025-06-11 15:19:41.363836 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:41.363847 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:19:41.363857 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:19:41.363868 | orchestrator | 2025-06-11 15:19:41.363878 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-11 15:19:41.363889 | orchestrator | Wednesday 11 June 2025 15:19:39 +0000 (0:00:00.273) 0:00:02.543 ******** 2025-06-11 15:19:41.363900 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:41.363910 | orchestrator | 2025-06-11 15:19:41.363921 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-11 15:19:41.363931 | orchestrator | Wednesday 11 June 2025 15:19:39 +0000 (0:00:00.135) 0:00:02.678 ******** 2025-06-11 15:19:41.363944 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:41.363957 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:41.363969 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:41.363981 | orchestrator | 2025-06-11 15:19:41.363993 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-11 15:19:41.364005 | orchestrator | Wednesday 11 June 2025 15:19:40 +0000 (0:00:00.303) 0:00:02.982 ******** 2025-06-11 15:19:41.364018 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:41.364029 | orchestrator | 2025-06-11 15:19:41.364042 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:41.364054 | orchestrator | Wednesday 11 June 2025 15:19:40 +0000 (0:00:00.470) 0:00:03.453 ******** 2025-06-11 15:19:41.364066 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:41.364078 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:41.364090 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:41.364102 | orchestrator | 2025-06-11 15:19:41.364113 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-11 15:19:41.364125 | orchestrator | Wednesday 11 June 2025 15:19:41 +0000 (0:00:00.459) 0:00:03.912 ******** 2025-06-11 15:19:41.364139 | orchestrator | skipping: [testbed-node-3] => (item={'id': '37dad744e88f7d8e7554468d9bed04a2fc5145efcdae665f8f6759d36d46d8af', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-11 15:19:41.364155 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78a1d2416485febf992076297d557f3245deabc70ee09bc8a834d866d523faf3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-11 15:19:41.364169 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25edf40a7b11d30cc5d8d7ac0c6ece467dcac4ce66a944c96751c436dd5d1d98', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-11 15:19:41.364189 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c85be08c355722edbeb6dabb6d4f3d0736f30af3637d2741db1a08076c4e92ba', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.364200 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1b05c1263be4e0a434e0e798ed29613bb2a9513f24c18556a7a9233ab09eab4a', 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.364227 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4edad7473b295eb416d603bb70a89b8ffdceeb68a2cf64ce53e852a4a32ede2f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-11 15:19:41.364246 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fab35847e4e35e31ac017ae4eaa51f082180c929fabb53aa11f57bf4464ce2de', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.364258 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e67f723320fdd08c73cce84842b4e565e89b4e6b3abcaef91e1477a951d1ec8', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.364269 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0862c737cff41174e2c1267fad0c065831e02ed887aad29045f6608d17f29b55', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-11 15:19:41.364283 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3dd3234b8257f7cf7a0f298d76584a714415a802746d9bacc705eb44b2edf04e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-11 15:19:41.364294 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c4d77626a47f2ded2f7e8a7752c2ca6fd5484d02a0408b7e4c496d52a4300c90', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-11 15:19:41.364305 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d79bac769f8fb0ef86ad915d49b25a006fcc92ab8f9709c4ec834c1a44302fc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-11 15:19:41.364316 | orchestrator | ok: [testbed-node-3] => (item={'id': '74fa152744ee0d1ba4862d50ab1c77d0af4cd9ef6f056fff0a7f12e56763ecaa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:41.364327 | orchestrator | ok: [testbed-node-3] => (item={'id': '72dcc62135e89a88b392dea884b635b53980f733f97db7465e5a7139182cbf59', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:41.364338 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e797d59dabed2c33ee83cc1e26ad9ca21ab1247b9876cea2618fa7123a2fbe8', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-11 15:19:41.364349 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0f6c3fbc833e623e73bd27a7c198592a5e166d03c9627ace85fe16854c98122', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-11 15:19:41.364360 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'a59e567cd7179d2498a4182ef9e16deb6f2c202e359d126613c624f7e2cc6d37', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-11 15:19:41.364378 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f87e47334cb1dea0ba273b21ac1c50ee0ec7ab681700355e7b9c298c1f61d510', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-11 15:19:41.364389 | orchestrator | skipping: [testbed-node-3] => (item={'id': '627f611020ef6fe3be5db3a75936104e595e126c409a99fbc217629297618e62', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-11 15:19:41.364400 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb01dff39e71ab342faed5518e1462abd55670d6c91fb1484e793419c6c24a42', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-11 15:19:41.364411 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e238852b4bbdd17f240605cd502474bfd83bc96e3ba8375dc34a0ac8f8b03bcc', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-11 15:19:41.364428 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c6c3c295009f6143ccb5be9cdb181aedfccc6d273e28c95b54b66a43b3d74bb2', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-11 15:19:41.579130 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb829f68cc384e2b843ea2de6a9a64134828e41088a3de6cff57e50ee97ba7c3', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-11 15:19:41.579227 | orchestrator | skipping: [testbed-node-4] => (item={'id': '135b305c17f1f6641ad1ae0671ee2ba11ec39a22127e5f1312619f78523be953', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.579245 | orchestrator | skipping: [testbed-node-4] => (item={'id': '82157c9ee8388c3d79aef3de7a0398f168e9f70061272776973f9ccc8df83635', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.579262 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c74f91dd993de914858af5e1b74e1a21a5f1f3e85aa638f1b6a2eefe6e458764', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-11 15:19:41.579274 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bd2acb6315703a0296509ddbc5a2e13eda39301c71dba7b6795ce49239abe5d6', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.579286 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1da55141173fe2c9ac3fa6fb74cae03d853af3ae248c9b39b500b9306cccf724', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.579297 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '53a6bd30bba4c8f3cc78b11f019367b70b6b3d902960c92007663eaa3efdd6b3', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-11 15:19:41.579308 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f26b3be795619d7a8ef2926905192650f42216b10c78585cb33198d4f50deee6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-11 15:19:41.579319 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5f8518bf8f6f6c63ab035695d0c926be98b48670d1a76ba6c17a292e16f06797', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-11 15:19:41.579348 | orchestrator | skipping: [testbed-node-4] => (item={'id': '583b526e2d3116e997689f417bd4a1792de29064aa6c793805e4b1f41fbf9d0a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-11 15:19:41.579360 | orchestrator | ok: [testbed-node-4] => (item={'id': '87f7f1bccc970a01881210180f6c8e9b448d6272fd11bb1696903c781ac5b6c9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:41.579371 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ac26d89751bba2a705666dbd4ff2664689530eb19271a13fc7077185866d91e4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:41.579382 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2c4913dff899e16f8725824cd1e439c7f52bb51fcdd3e76f7cf88bc60f0cc693', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-11 15:19:41.579394 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0749d0fedc65cf0a8d939c6ee7a92ac2fafdc012bc41e3efc4fff3d0649a4dcb', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-11 15:19:41.579405 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e92bc0d29f7dd84f9a23cc54847bc51e1621fcac05bd0e6fddd5aac19f3c5ef1', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-11 15:19:41.579432 | orchestrator | skipping: [testbed-node-4] => (item={'id': '746020effd2194d3c57b02e1ccff559e529d12a4aea9aa12759bb60e30ef2471', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-11 15:19:41.579444 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5769cafcf7b4da35512a69e76a8c8b0b0ddf98d20a93edba2a6c5eddc9c9ba08', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-11 15:19:41.579455 | orchestrator | skipping: [testbed-node-4] => (item={'id': '74841e25d6d74704561f20b484e00856b218f8801dbce7b8a01990697e23f592', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-11 15:19:41.579466 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'6c857ebf702b98fe33d5aba4a32c20e00c79eec658da401d715c31c5777ea98e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-11 15:19:41.579482 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7773436ac5d038322d450ab39e6737d113db076e055b3dae338a092da3e97611', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-11 15:19:41.579493 | orchestrator | skipping: [testbed-node-5] => (item={'id': '576e38796361df777e2c683e80eac3066b999f0ee718898d0fd7aaeb63ccb8be', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-11 15:19:41.579505 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5a20f28e912cb4fd0858aeb89d1d5a6c3559e82e79a9e1ab661d8117d826cee1', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.579515 | orchestrator | skipping: [testbed-node-5] => (item={'id': '566cf9a165670dfeffda01614217d8481757729807c8ffafb05d677678db3ac8', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-11 15:19:41.579533 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8db5daebf9568458cddde02cf871567b8c4bf1750c089cebfddfc138c8f1161f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-11 15:19:41.579544 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b0af81374d21e38c41210af32a67b6b38d739f6d5aa225051cffaa9909a9fe9', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.579555 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad175d36043554c3d591c2afe2715514bb5f88864c6a848c1c18da756bd6aea8', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-11 15:19:41.579566 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6686110a6526dba0d38e9283de3c8aae212d4ed2dfd05115f08013ebab23e0bb', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-11 15:19:41.579577 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b5c7b3c1c769b8c5c34b30afc48545566d1b60ec2deec5029786e7a0ac8497ec', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-11 15:19:41.579588 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6840ab9d4193529f8c410ea2deafdce4124aa17af5bbb1892bd7fd2c74f3297b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-11 15:19:41.579599 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f152bdda9438a3b5d11bf74fc2921296e2aeff7b2bfe18ac430525084de748b1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-11 
15:19:41.579616 | orchestrator | ok: [testbed-node-5] => (item={'id': '19cf7a70e10c9a7ba1a9ebd78bde75f5544a7bc9c9e0475558d0aa43065061b5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:49.966582 | orchestrator | ok: [testbed-node-5] => (item={'id': '4766030e1037af56c1929f0bf68ee403d1b084ddae9b5dc9416c9fa35e85ec42', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-11 15:19:49.966748 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67af15dcb0b43d48281635a5c0523627a83c5e77d0a9400a5d2ded4add9ca7b6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-11 15:19:49.966770 | orchestrator | skipping: [testbed-node-5] => (item={'id': '19e1fd278dfba309f29442bc4a5f5aba92bda24aef3266b9f71cde1091fb33f3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-11 15:19:49.966801 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2925a521d00378b92523d9b092746759e367d430d7f5cc280473bcfcdf48f456', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-11 15:19:49.966814 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e01529f906830314e2e824c32b07550a485889b22c10d461916d47b4a6c3bc2', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-11 15:19:49.966840 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6ef879ee9dac76f7191890a9dd75479759c9a70ebd75109625b7ffc23900aeb9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-11 15:19:49.966891 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0157e8b5b02e24d4ca3e5fd879f2dd62c9b7aea88fef5bccabd66f2b7e1c869f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-11 15:19:49.966904 | orchestrator | 2025-06-11 15:19:49.966918 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-11 15:19:49.966930 | orchestrator | Wednesday 11 June 2025 15:19:41 +0000 (0:00:00.510) 0:00:04.423 ******** 2025-06-11 15:19:49.966941 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.966956 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.966975 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.966993 | orchestrator | 2025-06-11 15:19:49.967011 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-11 15:19:49.967028 | orchestrator | Wednesday 11 June 2025 15:19:41 +0000 (0:00:00.291) 0:00:04.714 ******** 2025-06-11 15:19:49.967046 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967064 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:19:49.967082 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:19:49.967099 | orchestrator | 2025-06-11 15:19:49.967118 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-11 15:19:49.967138 | orchestrator | Wednesday 11 June 2025 15:19:42 +0000 (0:00:00.390) 0:00:05.105 ******** 
2025-06-11 15:19:49.967158 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.967177 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.967190 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.967201 | orchestrator | 2025-06-11 15:19:49.967213 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:49.967224 | orchestrator | Wednesday 11 June 2025 15:19:42 +0000 (0:00:00.258) 0:00:05.363 ******** 2025-06-11 15:19:49.967236 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.967248 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.967260 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.967271 | orchestrator | 2025-06-11 15:19:49.967283 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-11 15:19:49.967294 | orchestrator | Wednesday 11 June 2025 15:19:42 +0000 (0:00:00.256) 0:00:05.620 ******** 2025-06-11 15:19:49.967306 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-11 15:19:49.967320 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-11 15:19:49.967332 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-11 15:19:49.967356 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-11 15:19:49.967368 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:19:49.967379 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-11 15:19:49.967392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-11 15:19:49.967403 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:19:49.967415 | orchestrator | 2025-06-11 15:19:49.967427 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-11 15:19:49.967439 | orchestrator | Wednesday 11 June 2025 15:19:43 +0000 (0:00:00.296) 0:00:05.917 ******** 2025-06-11 15:19:49.967450 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.967462 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.967475 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.967486 | orchestrator | 2025-06-11 15:19:49.967516 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-11 15:19:49.967528 | orchestrator | Wednesday 11 June 2025 15:19:43 +0000 (0:00:00.548) 0:00:06.465 ******** 2025-06-11 15:19:49.967539 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967549 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:19:49.967571 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:19:49.967582 | orchestrator | 2025-06-11 15:19:49.967593 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-11 15:19:49.967603 | orchestrator | Wednesday 11 June 2025 15:19:43 +0000 (0:00:00.290) 0:00:06.756 ******** 2025-06-11 15:19:49.967614 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967625 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:19:49.967635 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:19:49.967646 | orchestrator | 2025-06-11 
15:19:49.967657 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-11 15:19:49.967667 | orchestrator | Wednesday 11 June 2025 15:19:44 +0000 (0:00:00.301) 0:00:07.058 ******** 2025-06-11 15:19:49.967704 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.967715 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.967726 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.967737 | orchestrator | 2025-06-11 15:19:49.967748 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:19:49.967758 | orchestrator | Wednesday 11 June 2025 15:19:44 +0000 (0:00:00.300) 0:00:07.358 ******** 2025-06-11 15:19:49.967770 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967780 | orchestrator | 2025-06-11 15:19:49.967791 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:19:49.967812 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.645) 0:00:08.004 ******** 2025-06-11 15:19:49.967823 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967834 | orchestrator | 2025-06-11 15:19:49.967844 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-11 15:19:49.967855 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.251) 0:00:08.255 ******** 2025-06-11 15:19:49.967866 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.967876 | orchestrator | 2025-06-11 15:19:49.967887 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:49.967898 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.251) 0:00:08.506 ******** 2025-06-11 15:19:49.967909 | orchestrator | 2025-06-11 15:19:49.967919 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:49.967930 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.077) 0:00:08.584 ******** 2025-06-11 15:19:49.967941 | orchestrator | 2025-06-11 15:19:49.967952 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:19:49.967962 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.077) 0:00:08.662 ******** 2025-06-11 15:19:49.967973 | orchestrator | 2025-06-11 15:19:49.967984 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:19:49.967994 | orchestrator | Wednesday 11 June 2025 15:19:45 +0000 (0:00:00.069) 0:00:08.731 ******** 2025-06-11 15:19:49.968005 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.968016 | orchestrator | 2025-06-11 15:19:49.968027 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-11 15:19:49.968037 | orchestrator | Wednesday 11 June 2025 15:19:46 +0000 (0:00:00.244) 0:00:08.976 ******** 2025-06-11 15:19:49.968048 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.968058 | orchestrator | 2025-06-11 15:19:49.968070 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:49.968080 | orchestrator | Wednesday 11 June 2025 15:19:46 +0000 (0:00:00.241) 0:00:09.217 ******** 2025-06-11 15:19:49.968091 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968101 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.968112 | orchestrator | ok: [testbed-node-5] 
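The tasks above enumerate every Docker container on each node and act only on those whose name matches ceph-osd-<id>, first comparing the count of such containers against the expected number of OSDs per host (two in this run) and then verifying that each one is in the running state. A minimal manual sketch of the same check, assuming Docker CLI access on a testbed node; the name filter is an assumption, not taken from the validator source:

  # List running ceph-osd containers with their status on one node.
  docker ps --filter "name=ceph-osd" --format "{{.Names}}: {{.Status}}"
  # Count them to compare against the expected number of OSDs on this host.
  docker ps --filter "name=ceph-osd" --quiet | wc -l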
2025-06-11 15:19:49.968127 | orchestrator | 2025-06-11 15:19:49.968145 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-11 15:19:49.968164 | orchestrator | Wednesday 11 June 2025 15:19:46 +0000 (0:00:00.316) 0:00:09.533 ******** 2025-06-11 15:19:49.968181 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968198 | orchestrator | 2025-06-11 15:19:49.968226 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-11 15:19:49.968246 | orchestrator | Wednesday 11 June 2025 15:19:47 +0000 (0:00:00.680) 0:00:10.214 ******** 2025-06-11 15:19:49.968264 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-11 15:19:49.968283 | orchestrator | 2025-06-11 15:19:49.968303 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-11 15:19:49.968321 | orchestrator | Wednesday 11 June 2025 15:19:48 +0000 (0:00:01.626) 0:00:11.840 ******** 2025-06-11 15:19:49.968338 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968357 | orchestrator | 2025-06-11 15:19:49.968373 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-11 15:19:49.968384 | orchestrator | Wednesday 11 June 2025 15:19:49 +0000 (0:00:00.130) 0:00:11.971 ******** 2025-06-11 15:19:49.968395 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968405 | orchestrator | 2025-06-11 15:19:49.968416 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-11 15:19:49.968427 | orchestrator | Wednesday 11 June 2025 15:19:49 +0000 (0:00:00.307) 0:00:12.278 ******** 2025-06-11 15:19:49.968437 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:19:49.968448 | orchestrator | 2025-06-11 15:19:49.968458 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-11 15:19:49.968469 | orchestrator | Wednesday 11 June 2025 15:19:49 +0000 (0:00:00.117) 0:00:12.395 ******** 2025-06-11 15:19:49.968479 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968490 | orchestrator | 2025-06-11 15:19:49.968500 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:19:49.968511 | orchestrator | Wednesday 11 June 2025 15:19:49 +0000 (0:00:00.136) 0:00:12.532 ******** 2025-06-11 15:19:49.968522 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:19:49.968532 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:19:49.968543 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:19:49.968553 | orchestrator | 2025-06-11 15:19:49.968564 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-11 15:19:49.968585 | orchestrator | Wednesday 11 June 2025 15:19:49 +0000 (0:00:00.276) 0:00:12.809 ******** 2025-06-11 15:20:02.587105 | orchestrator | changed: [testbed-node-3] 2025-06-11 15:20:02.587195 | orchestrator | changed: [testbed-node-4] 2025-06-11 15:20:02.587210 | orchestrator | changed: [testbed-node-5] 2025-06-11 15:20:02.587223 | orchestrator | 2025-06-11 15:20:02.587236 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-11 15:20:02.587248 | orchestrator | Wednesday 11 June 2025 15:19:52 +0000 (0:00:02.695) 0:00:15.504 ******** 2025-06-11 15:20:02.587260 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587272 | orchestrator | ok: [testbed-node-4] 2025-06-11 
15:20:02.587282 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587293 | orchestrator | 2025-06-11 15:20:02.587305 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-11 15:20:02.587316 | orchestrator | Wednesday 11 June 2025 15:19:52 +0000 (0:00:00.318) 0:00:15.823 ******** 2025-06-11 15:20:02.587326 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587337 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.587348 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587359 | orchestrator | 2025-06-11 15:20:02.587369 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-11 15:20:02.587380 | orchestrator | Wednesday 11 June 2025 15:19:53 +0000 (0:00:00.538) 0:00:16.362 ******** 2025-06-11 15:20:02.587391 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:20:02.587402 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:20:02.587420 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:20:02.587432 | orchestrator | 2025-06-11 15:20:02.587443 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-11 15:20:02.587454 | orchestrator | Wednesday 11 June 2025 15:19:53 +0000 (0:00:00.353) 0:00:16.715 ******** 2025-06-11 15:20:02.587465 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587492 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.587503 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587514 | orchestrator | 2025-06-11 15:20:02.587525 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-11 15:20:02.587536 | orchestrator | Wednesday 11 June 2025 15:19:54 +0000 (0:00:00.657) 0:00:17.373 ******** 2025-06-11 15:20:02.587547 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:20:02.587558 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:20:02.587568 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:20:02.587579 | orchestrator | 2025-06-11 15:20:02.587590 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-11 15:20:02.587601 | orchestrator | Wednesday 11 June 2025 15:19:54 +0000 (0:00:00.365) 0:00:17.738 ******** 2025-06-11 15:20:02.587612 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:20:02.587623 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:20:02.587634 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:20:02.587645 | orchestrator | 2025-06-11 15:20:02.587655 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-11 15:20:02.587666 | orchestrator | Wednesday 11 June 2025 15:19:55 +0000 (0:00:00.325) 0:00:18.064 ******** 2025-06-11 15:20:02.587677 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587713 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.587725 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587736 | orchestrator | 2025-06-11 15:20:02.587747 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node children] ************* 2025-06-11 15:20:02.587758 | orchestrator | Wednesday 11 June 2025 15:19:55 +0000 (0:00:00.479) 0:00:18.543 ******** 2025-06-11 15:20:02.587769 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587780 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.587790 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587801 | orchestrator | 2025-06-11 15:20:02.587812 |
orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-11 15:20:02.587823 | orchestrator | Wednesday 11 June 2025 15:19:56 +0000 (0:00:00.844) 0:00:19.388 ******** 2025-06-11 15:20:02.587833 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.587844 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.587855 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.587865 | orchestrator | 2025-06-11 15:20:02.587876 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-11 15:20:02.587887 | orchestrator | Wednesday 11 June 2025 15:19:56 +0000 (0:00:00.367) 0:00:19.755 ******** 2025-06-11 15:20:02.587898 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:20:02.587909 | orchestrator | skipping: [testbed-node-4] 2025-06-11 15:20:02.587919 | orchestrator | skipping: [testbed-node-5] 2025-06-11 15:20:02.587930 | orchestrator | 2025-06-11 15:20:02.587941 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-11 15:20:02.587952 | orchestrator | Wednesday 11 June 2025 15:19:57 +0000 (0:00:00.300) 0:00:20.056 ******** 2025-06-11 15:20:02.588011 | orchestrator | ok: [testbed-node-3] 2025-06-11 15:20:02.588069 | orchestrator | ok: [testbed-node-4] 2025-06-11 15:20:02.588082 | orchestrator | ok: [testbed-node-5] 2025-06-11 15:20:02.588093 | orchestrator | 2025-06-11 15:20:02.588103 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-11 15:20:02.588114 | orchestrator | Wednesday 11 June 2025 15:19:57 +0000 (0:00:00.331) 0:00:20.387 ******** 2025-06-11 15:20:02.588125 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:20:02.588136 | orchestrator | 2025-06-11 15:20:02.588147 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-11 15:20:02.588158 | orchestrator | Wednesday 11 June 2025 15:19:58 +0000 (0:00:00.672) 0:00:21.060 ******** 2025-06-11 15:20:02.588169 | orchestrator | skipping: [testbed-node-3] 2025-06-11 15:20:02.588179 | orchestrator | 2025-06-11 15:20:02.588190 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-11 15:20:02.588201 | orchestrator | Wednesday 11 June 2025 15:19:58 +0000 (0:00:00.248) 0:00:21.309 ******** 2025-06-11 15:20:02.588221 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:20:02.588232 | orchestrator | 2025-06-11 15:20:02.588242 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-11 15:20:02.588253 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:01.668) 0:00:22.977 ******** 2025-06-11 15:20:02.588264 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:20:02.588275 | orchestrator | 2025-06-11 15:20:02.588286 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-11 15:20:02.588297 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:00.270) 0:00:23.248 ******** 2025-06-11 15:20:02.588325 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:20:02.588336 | orchestrator | 2025-06-11 15:20:02.588347 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:20:02.588358 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:00.247) 
0:00:23.496 ******** 2025-06-11 15:20:02.588369 | orchestrator | 2025-06-11 15:20:02.588380 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:20:02.588390 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:00.063) 0:00:23.559 ******** 2025-06-11 15:20:02.588401 | orchestrator | 2025-06-11 15:20:02.588412 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-11 15:20:02.588422 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:00.061) 0:00:23.621 ******** 2025-06-11 15:20:02.588433 | orchestrator | 2025-06-11 15:20:02.588444 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-11 15:20:02.588454 | orchestrator | Wednesday 11 June 2025 15:20:00 +0000 (0:00:00.079) 0:00:23.700 ******** 2025-06-11 15:20:02.588465 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-11 15:20:02.588476 | orchestrator | 2025-06-11 15:20:02.588487 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-11 15:20:02.588503 | orchestrator | Wednesday 11 June 2025 15:20:01 +0000 (0:00:01.039) 0:00:24.739 ******** 2025-06-11 15:20:02.588514 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-11 15:20:02.588525 | orchestrator |  "msg": [ 2025-06-11 15:20:02.588536 | orchestrator |  "Validator run completed.", 2025-06-11 15:20:02.588547 | orchestrator |  "You can find the report file here:", 2025-06-11 15:20:02.588558 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-11T15:19:37+00:00-report.json", 2025-06-11 15:20:02.588570 | orchestrator |  "on the following host:", 2025-06-11 15:20:02.588581 | orchestrator |  "testbed-manager" 2025-06-11 15:20:02.588591 | orchestrator |  ] 2025-06-11 15:20:02.588603 | orchestrator | } 2025-06-11 15:20:02.588614 | orchestrator | 2025-06-11 15:20:02.588625 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:20:02.588637 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-11 15:20:02.588648 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-11 15:20:02.588659 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-11 15:20:02.588670 | orchestrator | 2025-06-11 15:20:02.588725 | orchestrator | 2025-06-11 15:20:02.588737 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-11 15:20:02.588748 | orchestrator | Wednesday 11 June 2025 15:20:02 +0000 (0:00:00.485) 0:00:25.225 ******** 2025-06-11 15:20:02.588759 | orchestrator | =============================================================================== 2025-06-11 15:20:02.588769 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.70s 2025-06-11 15:20:02.588780 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2025-06-11 15:20:02.588798 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.63s 2025-06-11 15:20:02.588809 | orchestrator | Write report file ------------------------------------------------------- 1.04s 2025-06-11 15:20:02.588820 | orchestrator | Get CRUSH node data of each OSD host and root node 
children ------------- 0.84s 2025-06-11 15:20:02.588830 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2025-06-11 15:20:02.588841 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.68s 2025-06-11 15:20:02.588852 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.67s 2025-06-11 15:20:02.588862 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.66s 2025-06-11 15:20:02.588873 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-06-11 15:20:02.588883 | orchestrator | Get timestamp for report file ------------------------------------------- 0.60s 2025-06-11 15:20:02.588894 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.55s 2025-06-11 15:20:02.588905 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s 2025-06-11 15:20:02.588915 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.51s 2025-06-11 15:20:02.588926 | orchestrator | Print report file information ------------------------------------------- 0.49s 2025-06-11 15:20:02.588936 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-06-11 15:20:02.588947 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.47s 2025-06-11 15:20:02.588958 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2025-06-11 15:20:02.588968 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.39s 2025-06-11 15:20:02.588979 | orchestrator | Calculate sub test expression results ----------------------------------- 0.37s 2025-06-11 15:20:02.731111 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-11 15:20:02.736395 | orchestrator | + set -e 2025-06-11 15:20:02.737088 | orchestrator | + source /opt/manager-vars.sh 2025-06-11 15:20:02.737114 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-11 15:20:02.737141 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-11 15:20:02.737154 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-11 15:20:02.737166 | orchestrator | ++ CEPH_VERSION=reef 2025-06-11 15:20:02.737178 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-11 15:20:02.737191 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-11 15:20:02.737204 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 15:20:02.737216 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 15:20:02.737228 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-11 15:20:02.737240 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-11 15:20:02.737252 | orchestrator | ++ export ARA=false 2025-06-11 15:20:02.737264 | orchestrator | ++ ARA=false 2025-06-11 15:20:02.737276 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-11 15:20:02.737288 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-11 15:20:02.737300 | orchestrator | ++ export TEMPEST=false 2025-06-11 15:20:02.737311 | orchestrator | ++ TEMPEST=false 2025-06-11 15:20:02.737323 | orchestrator | ++ export IS_ZUUL=true 2025-06-11 15:20:02.737335 | orchestrator | ++ IS_ZUUL=true 2025-06-11 15:20:02.737347 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182 2025-06-11 15:20:02.737360 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.182
2025-06-11 15:20:02.737371 | orchestrator | ++ export EXTERNAL_API=false 2025-06-11 15:20:02.737382 | orchestrator | ++ EXTERNAL_API=false 2025-06-11 15:20:02.737392 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-11 15:20:02.737403 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-11 15:20:02.737414 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-11 15:20:02.737425 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-11 15:20:02.737436 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-11 15:20:02.737446 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-11 15:20:02.737457 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-11 15:20:02.737467 | orchestrator | + source /etc/os-release 2025-06-11 15:20:02.737478 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-11 15:20:02.737489 | orchestrator | ++ NAME=Ubuntu 2025-06-11 15:20:02.737500 | orchestrator | ++ VERSION_ID=24.04 2025-06-11 15:20:02.737510 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-11 15:20:02.737544 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-11 15:20:02.737555 | orchestrator | ++ ID=ubuntu 2025-06-11 15:20:02.737566 | orchestrator | ++ ID_LIKE=debian 2025-06-11 15:20:02.737577 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-11 15:20:02.737588 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-11 15:20:02.737599 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-11 15:20:02.737610 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-11 15:20:02.737622 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-11 15:20:02.737633 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-11 15:20:02.737643 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-11 15:20:02.737655 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-11 15:20:02.737667 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-11 15:20:02.758562 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-11 15:20:25.868077 | orchestrator | 2025-06-11 15:20:25.868195 | orchestrator | # Status of Elasticsearch 2025-06-11 15:20:25.868212 | orchestrator | 2025-06-11 15:20:25.868225 | orchestrator | + pushd /opt/configuration/contrib 2025-06-11 15:20:25.868238 | orchestrator | + echo 2025-06-11 15:20:25.868250 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-11 15:20:25.868261 | orchestrator | + echo 2025-06-11 15:20:25.868272 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-11 15:20:26.072829 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-11 15:20:26.072957 | orchestrator | 2025-06-11 15:20:26.072987 | orchestrator | # Status of MariaDB 2025-06-11 15:20:26.073010 | orchestrator | 2025-06-11 15:20:26.073030 | orchestrator | + echo 2025-06-11 15:20:26.073051 | orchestrator | + echo '# Status of MariaDB' 2025-06-11 15:20:26.073071 | orchestrator | + echo 2025-06-11 15:20:26.073090 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-11 15:20:26.073110 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-11 15:20:26.132615 | orchestrator | Reading package lists... 2025-06-11 15:20:26.457991 | orchestrator | Building dependency tree... 2025-06-11 15:20:26.458971 | orchestrator | Reading state information... 2025-06-11 15:20:26.832608 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-11 15:20:26.832710 | orchestrator | bc set to manually installed. 2025-06-11 15:20:26.832769 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-11 15:20:27.479233 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-11 15:20:27.479928 | orchestrator | 2025-06-11 15:20:27.479963 | orchestrator | # Status of Prometheus 2025-06-11 15:20:27.479976 | orchestrator | 2025-06-11 15:20:27.479988 | orchestrator | + echo 2025-06-11 15:20:27.480000 | orchestrator | + echo '# Status of Prometheus' 2025-06-11 15:20:27.480011 | orchestrator | + echo 2025-06-11 15:20:27.480023 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-11 15:20:27.551890 | orchestrator | Unauthorized 2025-06-11 15:20:27.556422 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-11 15:20:27.606652 | orchestrator | Unauthorized 2025-06-11 15:20:27.609816 | orchestrator | 2025-06-11 15:20:27.609864 | orchestrator | # Status of RabbitMQ 2025-06-11 15:20:27.609875 | orchestrator | 2025-06-11 15:20:27.609884 | orchestrator | + echo 2025-06-11 15:20:27.609892 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-11 15:20:27.609900 | orchestrator | + echo 2025-06-11 15:20:27.609910 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-11 15:20:28.036601 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-11 15:20:28.047142 | orchestrator | 2025-06-11 15:20:28.047199 | orchestrator | # Status of Redis 2025-06-11 15:20:28.047214 | orchestrator | 2025-06-11 15:20:28.047226 | orchestrator | + echo 2025-06-11 15:20:28.047238 | orchestrator | + echo '# Status of Redis' 2025-06-11 15:20:28.047251 | orchestrator | + echo 2025-06-11 15:20:28.047264 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-11 15:20:28.054392 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002006s;;;0.000000;10.000000 2025-06-11 15:20:28.055273 | orchestrator | 2025-06-11 15:20:28.055322 | orchestrator | # Create backup of MariaDB 
database 2025-06-11 15:20:28.055344 | orchestrator | 2025-06-11 15:20:28.055430 | orchestrator | + popd 2025-06-11 15:20:28.055452 | orchestrator | + echo 2025-06-11 15:20:28.055471 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-11 15:20:28.055490 | orchestrator | + echo 2025-06-11 15:20:28.055511 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-11 15:20:29.854422 | orchestrator | 2025-06-11 15:20:29 | INFO  | Task 8f1137a1-c5e6-41cb-b095-32c4489b2d02 (mariadb_backup) was prepared for execution. 2025-06-11 15:20:29.854538 | orchestrator | 2025-06-11 15:20:29 | INFO  | It takes a moment until task 8f1137a1-c5e6-41cb-b095-32c4489b2d02 (mariadb_backup) has been started and output is visible here. 2025-06-11 15:20:57.542588 | orchestrator | 2025-06-11 15:20:57.542706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-11 15:20:57.542723 | orchestrator | 2025-06-11 15:20:57.542736 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-11 15:20:57.542748 | orchestrator | Wednesday 11 June 2025 15:20:33 +0000 (0:00:00.180) 0:00:00.180 ******** 2025-06-11 15:20:57.542760 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:20:57.542802 | orchestrator | ok: [testbed-node-1] 2025-06-11 15:20:57.542814 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:20:57.542825 | orchestrator | 2025-06-11 15:20:57.542837 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-11 15:20:57.542848 | orchestrator | Wednesday 11 June 2025 15:20:34 +0000 (0:00:00.317) 0:00:00.498 ******** 2025-06-11 15:20:57.542859 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-11 15:20:57.542871 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-11 15:20:57.542882 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-11 15:20:57.542893 | orchestrator | 2025-06-11 15:20:57.542904 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-11 15:20:57.542915 | orchestrator | 2025-06-11 15:20:57.542926 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-11 15:20:57.542937 | orchestrator | Wednesday 11 June 2025 15:20:34 +0000 (0:00:00.578) 0:00:01.077 ******** 2025-06-11 15:20:57.542948 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-11 15:20:57.542959 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-11 15:20:57.542970 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-11 15:20:57.542981 | orchestrator | 2025-06-11 15:20:57.542992 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-11 15:20:57.543017 | orchestrator | Wednesday 11 June 2025 15:20:35 +0000 (0:00:00.414) 0:00:01.491 ******** 2025-06-11 15:20:57.543030 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-11 15:20:57.543079 | orchestrator | 2025-06-11 15:20:57.543091 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-11 15:20:57.543102 | orchestrator | Wednesday 11 June 2025 15:20:35 +0000 (0:00:00.522) 0:00:02.014 ******** 2025-06-11 15:20:57.543113 | orchestrator | ok: [testbed-node-0] 2025-06-11 15:20:57.543125 | orchestrator | ok: [testbed-node-1] 
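The backup run above is driven by the osism wrapper, with the mariadb_backup_type variable selecting the Mariabackup mode inside kolla-ansible's mariadb role. A short sketch of the invocation; the incremental variant is an assumption based on that variable and is not exercised by this job:

  # Full backup, exactly as invoked by the check script above.
  osism apply mariadb_backup -e mariadb_backup_type=full
  # Presumed incremental variant via the same variable (assumption).
  osism apply mariadb_backup -e mariadb_backup_type=incremental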
2025-06-11 15:20:57.543136 | orchestrator | ok: [testbed-node-2] 2025-06-11 15:20:57.543149 | orchestrator | 2025-06-11 15:20:57.543162 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-11 15:20:57.543174 | orchestrator | Wednesday 11 June 2025 15:20:38 +0000 (0:00:03.150) 0:00:05.164 ******** 2025-06-11 15:20:57.543187 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-11 15:20:57.543200 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-11 15:20:57.543237 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-11 15:20:57.543250 | orchestrator | mariadb_bootstrap_restart 2025-06-11 15:20:57.543263 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:20:57.543274 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:20:57.543285 | orchestrator | changed: [testbed-node-0] 2025-06-11 15:20:57.543295 | orchestrator | 2025-06-11 15:20:57.543306 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-11 15:20:57.543317 | orchestrator | skipping: no hosts matched 2025-06-11 15:20:57.543328 | orchestrator | 2025-06-11 15:20:57.543338 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-11 15:20:57.543349 | orchestrator | skipping: no hosts matched 2025-06-11 15:20:57.543360 | orchestrator | 2025-06-11 15:20:57.543371 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-11 15:20:57.543381 | orchestrator | skipping: no hosts matched 2025-06-11 15:20:57.543392 | orchestrator | 2025-06-11 15:20:57.543403 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-11 15:20:57.543414 | orchestrator | 2025-06-11 15:20:57.543425 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-11 15:20:57.543435 | orchestrator | Wednesday 11 June 2025 15:20:56 +0000 (0:00:17.813) 0:00:22.978 ******** 2025-06-11 15:20:57.543446 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:20:57.543457 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:20:57.543468 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:20:57.543478 | orchestrator | 2025-06-11 15:20:57.543489 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-11 15:20:57.543500 | orchestrator | Wednesday 11 June 2025 15:20:56 +0000 (0:00:00.286) 0:00:23.264 ******** 2025-06-11 15:20:57.543511 | orchestrator | skipping: [testbed-node-0] 2025-06-11 15:20:57.543521 | orchestrator | skipping: [testbed-node-1] 2025-06-11 15:20:57.543532 | orchestrator | skipping: [testbed-node-2] 2025-06-11 15:20:57.543543 | orchestrator | 2025-06-11 15:20:57.543554 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-11 15:20:57.543566 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-11 15:20:57.543577 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-11 15:20:57.543588 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-11 15:20:57.543599 | orchestrator | 2025-06-11 15:20:57.543611 | orchestrator | 2025-06-11 15:20:57.543622 | orchestrator 
| TASKS RECAP ******************************************************************** 2025-06-11 15:20:57.543632 | orchestrator | Wednesday 11 June 2025 15:20:57 +0000 (0:00:00.398) 0:00:23.662 ******** 2025-06-11 15:20:57.543643 | orchestrator | =============================================================================== 2025-06-11 15:20:57.543654 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.81s 2025-06-11 15:20:57.543681 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.15s 2025-06-11 15:20:57.543692 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-06-11 15:20:57.543703 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-06-11 15:20:57.543714 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2025-06-11 15:20:57.543724 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2025-06-11 15:20:57.543735 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-11 15:20:57.543746 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-06-11 15:20:57.779872 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-11 15:20:57.788455 | orchestrator | + set -e 2025-06-11 15:20:57.788498 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-11 15:20:57.788513 | orchestrator | ++ export INTERACTIVE=false 2025-06-11 15:20:57.788525 | orchestrator | ++ INTERACTIVE=false 2025-06-11 15:20:57.788536 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-11 15:20:57.788547 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-11 15:20:57.788558 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-11 15:20:57.789154 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-11 15:20:57.795418 | orchestrator | 2025-06-11 15:20:57.795459 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-11 15:20:57.795471 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-11 15:20:57.795482 | orchestrator | + export OS_CLOUD=admin 2025-06-11 15:20:57.795493 | orchestrator | + OS_CLOUD=admin 2025-06-11 15:20:57.795504 | orchestrator | + echo 2025-06-11 15:20:57.795515 | orchestrator | + echo '# OpenStack endpoints' 2025-06-11 15:20:57.795526 | orchestrator | # OpenStack endpoints 2025-06-11 15:20:57.795537 | orchestrator | 2025-06-11 15:20:57.795548 | orchestrator | + echo 2025-06-11 15:20:57.795559 | orchestrator | + openstack endpoint list 2025-06-11 15:21:01.218735 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-11 15:21:01.218929 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-11 15:21:01.218946 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-11 15:21:01.218958 | orchestrator | | 0195021478a84f79b0ffe96750a3f39d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-11 15:21:01.218969 | orchestrator | | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
| ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                                                 |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
| 0195021478a84f79b0ffe96750a3f39d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
| 01a9fb6316b34b1ba15457023b5ddb94 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
| 10d4519141b34a5da7ad07f5e54ee615 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
| 2aa00a3ef80a469890d4b2f3f2629155 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
| 5559ab0f9549410f8335e5178a08183d | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
| 55f7d023ed314590a7f48467e25c58ea | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
| 6031f1bf77b8438ba2c2b12c66ea0710 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
| 622d44cb14da4b539d7d78113f3446c0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
| 6f377d2ff8214afaaa62ff9159f6fdf2 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
| 776fb2b029a449eebbde70e73d5ee4bf | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
| 794949ba098f456c8eeb643d6cb3e611 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
| 82e139027c814ccebc350e1caa5a7468 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
| 8830bb24fdfa479389967acf42bf1fb4 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
| 98ae4e41d87e475d96b541960635e5a6 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
| 9f0e33fcbf9b44029263e9e180742f2a | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
| a052b86ced0145f9a921bf2196869369 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
| ae74391a280c4b099e50e76d6b49c081 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
| c50cd7d5a25c40f9b59c0478dc942bdc | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
| c922ea7d747a481e99e47b4144424cc5 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
| d5f4a0c3c2fe4be18c30a46e267dcceb | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
| dfec0645c83045b88bbbf29aebcbf8ae | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
| e4f91e9535bb4351a11544c42bcf5cdc | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
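Every service in this listing is registered with both a public and an internal interface. For a quick spot check of a single service from a shell, the same listing can be narrowed with standard python-openstackclient flags (a sketch, not part of the job's check script; the cloud name and URL are taken from the output above):

    # Sketch: show only the public keystone endpoint and probe it directly.
    openstack --os-cloud admin endpoint list --service keystone --interface public -f value -c URL
    curl -sf https://api.testbed.osism.xyz:5000/v3/ >/dev/null && echo 'keystone reachable'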

# Cinder

+ echo
+ echo '# Cinder'
+ echo
+ openstack volume service list
+------------------+----------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                       | Zone     | Status  | State | Updated At                 |
+------------------+----------------------------+----------+---------+-------+----------------------------+
| cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-11T15:20:55.000000 |
| cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-11T15:20:55.000000 |
| cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-11T15:20:54.000000 |
| cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-11T15:20:54.000000 |
| cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-11T15:20:54.000000 |
| cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-11T15:20:55.000000 |
| cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-11T15:20:59.000000 |
| cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-11T15:20:59.000000 |
| cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-11T15:20:59.000000 |
+------------------+----------------------------+----------+---------+-------+----------------------------+

# Neutron

+ echo
+ echo '# Neutron'
+ echo
+ openstack network agent list
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
| ID                                   | Agent Type                   | Host           | Availability Zone | Alive | State | Binary                     |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
| testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
| testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
| testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
| testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
| testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
| testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
| e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
| 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
| 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
+ openstack network service provider list
+---------------+------+---------+
| Service Type  | Name | Default |
+---------------+------+---------+
| L3_ROUTER_NAT | ovn  | True    |
+---------------+------+---------+

# Nova

+ echo
+ echo '# Nova'
+ echo
+ openstack compute service list
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host           | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| fbb90788-b46d-4f7d-86f2-45fd15f2e791 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-11T15:21:10.000000 |
| 8ba8b503-7142-47ff-b0e1-6ff33492c74b | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-11T15:21:13.000000 |
| 2e96abda-ec4e-46ed-8127-5f93af35a9ce | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-11T15:21:13.000000 |
| be2a103a-875f-4e38-a0c1-dab4034440d8 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-11T15:21:11.000000 |
| c75d0e70-d7de-47bc-a146-a94052fc3cca | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-11T15:21:13.000000 |
| 81ee8e0c-1d1a-47e5-af46-a97c0b654ae5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-11T15:21:13.000000 |
| 24658276-364c-4537-beb0-4e38d4929d9e | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-11T15:21:08.000000 |
| 3a939a79-4bc2-4165-988d-01826131933c | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-11T15:21:08.000000 |
| 39043d52-6d34-4172-8aa7-a48328303c88 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-11T15:21:09.000000 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
+ openstack hypervisor list
+--------------------------------------+---------------------+-----------------+---------------+-------+
| ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
+--------------------------------------+---------------------+-----------------+---------------+-------+
| be76068e-55fe-400a-bdf7-14bb41dfb906 | testbed-node-3 | QEMU | 192.168.16.13 | up |
| c409d9b2-52f3-4666-be9e-533c189c4b02 | testbed-node-5 | QEMU | 192.168.16.15 | up |
| 8fb864e3-af12-450c-b4e9-9e20a41de8fe | testbed-node-4 | QEMU | 192.168.16.14 | up |
+--------------------------------------+---------------------+-----------------+---------------+-------+

# Run OpenStack test play

+ echo
+ echo '# Run OpenStack test play'
+ echo
+ osism apply --environment openstack test
2025-06-11 15:21:21 | INFO  | Trying to run play test in environment openstack
Registering Redlock._acquired_script
Registering Redlock._extend_script
Registering Redlock._release_script
2025-06-11 15:21:21 | INFO  | Task 5af98c98-a327-49e2-8b74-2292e852a636 (test) was prepared for execution.
2025-06-11 15:21:21 | INFO  | It takes a moment until task 5af98c98-a327-49e2-8b74-2292e852a636 (test) has been started and output is visible here.
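The trace earlier exports OSISM_APPLY_RETRY=1 before any plays run. The helper that consumes it is not shown in this log; a retry wrapper in that spirit might look like the following (hypothetical sketch, function name and messages illustrative):

    # Hypothetical: retry "osism apply" up to $OSISM_APPLY_RETRY times.
    # Only OSISM_APPLY_RETRY itself is taken from the trace above.
    osism_apply_with_retry() {
        local n
        for n in $(seq 1 "${OSISM_APPLY_RETRY:-1}"); do
            osism apply "$@" && return 0
            echo "osism apply $* failed (attempt $n/${OSISM_APPLY_RETRY:-1})" >&2
        done
        return 1
    }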

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Wednesday 11 June 2025 15:21:25 +0000 (0:00:00.076)       0:00:00.076 ********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Wednesday 11 June 2025 15:21:28 +0000 (0:00:03.540)       0:00:03.617 ********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Wednesday 11 June 2025 15:21:32 +0000 (0:00:04.179)       0:00:07.797 ********
changed: [localhost]

TASK [Create test project] *****************************************************
Wednesday 11 June 2025 15:21:38 +0000 (0:00:05.932)       0:00:13.729 ********
changed: [localhost]

TASK [Create test user] ********************************************************
Wednesday 11 June 2025 15:21:42 +0000 (0:00:03.647)       0:00:17.377 ********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Wednesday 11 June 2025 15:21:46 +0000 (0:00:04.041)       0:00:21.418 ********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Wednesday 11 June 2025 15:21:58 +0000 (0:00:11.841)       0:00:33.260 ********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Wednesday 11 June 2025 15:22:03 +0000 (0:00:04.691)       0:00:37.951 ********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Wednesday 11 June 2025 15:22:08 +0000 (0:00:05.637)       0:00:43.589 ********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Wednesday 11 June 2025 15:22:12 +0000 (0:00:04.218)       0:00:47.807 ********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Wednesday 11 June 2025 15:22:16 +0000 (0:00:03.575)       0:00:51.383 ********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Wednesday 11 June 2025 15:22:20 +0000 (0:00:04.023)       0:00:55.407 ********
changed: [localhost]

TASK [Create test network topology] ********************************************
Wednesday 11 June 2025 15:22:24 +0000 (0:00:04.310)       0:00:59.717 ********
changed: [localhost]

TASK [Create test instances] ***************************************************
Wednesday 11 June 2025 15:22:40 +0000 (0:00:15.944)       0:01:15.662 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-3)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-4)

TASK [Add metadata to instances] ***********************************************
Wednesday 11 June 2025 15:25:57 +0000 (0:03:16.652)       0:04:32.315 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Add tag to instances] ****************************************************
Wednesday 11 June 2025 15:26:21 +0000 (0:00:24.239)       0:04:56.554 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Create test volume] ******************************************************
Wednesday 11 June 2025 15:26:53 +0000 (0:00:32.058)       0:05:28.613 ********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Wednesday 11 June 2025 15:27:00 +0000 (0:00:06.776)       0:05:35.389 ********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Wednesday 11 June 2025 15:27:14 +0000 (0:00:13.623)       0:05:49.013 ********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Wednesday 11 June 2025 15:27:19 +0000 (0:00:05.204)       0:05:54.217 ********
ok: [localhost] => {
    "msg": "192.168.112.131"
}

PLAY RECAP *********************************************************************
localhost : ok=20  changed=18  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 June 2025 15:27:19 +0000 (0:00:00.037)       0:05:54.255 ********
===============================================================================
Create test instances ------------------------------------------------- 196.65s
Add tag to instances --------------------------------------------------- 32.06s
Add metadata to instances ---------------------------------------------- 24.24s
Create test network topology ------------------------------------------- 15.94s
Attach test volume ----------------------------------------------------- 13.62s
Add member roles to user test ------------------------------------------ 11.84s
Create test volume ------------------------------------------------------ 6.78s
Add manager role to user test-admin ------------------------------------- 5.93s
Create ssh security group ----------------------------------------------- 5.64s
Create floating ip address ---------------------------------------------- 5.20s
Create test server group ------------------------------------------------ 4.69s
Create test keypair ----------------------------------------------------- 4.31s
Add rule to ssh security group ------------------------------------------ 4.22s
Create test-admin user -------------------------------------------------- 4.18s
Create test user -------------------------------------------------------- 4.04s
Add rule to icmp security group ----------------------------------------- 4.02s
Create test project ----------------------------------------------------- 3.65s
Create icmp security group ---------------------------------------------- 3.58s
Create test domain ------------------------------------------------------ 3.54s
Print floating ip address ----------------------------------------------- 0.04s
+ server_list
+ openstack --os-cloud test server list
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| ID                                   | Name   | Status | Networks                                           | Image        | Flavor     |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | auto_allocated_network=10.42.0.26, 192.168.112.117 | Cirros 0.6.2 | SCS-1L-1-5 |
| ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.116 | Cirros 0.6.2 | SCS-1L-1-5 |
| 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.161 | Cirros 0.6.2 | SCS-1L-1-5 |
| 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | auto_allocated_network=10.42.0.38, 192.168.112.190 | Cirros 0.6.2 | SCS-1L-1-5 |
| f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | auto_allocated_network=10.42.0.52, 192.168.112.131 | Cirros 0.6.2 | SCS-1L-1-5 |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
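Judging by the xtrace, server_list is a thin wrapper around the CLI call shown above; reconstructed from the expanded commands (the sourced original in /opt/configuration/scripts may differ):

    # Reconstruction of server_list as expanded in the trace above.
    server_list() {
        openstack --os-cloud test server list
    }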
+ openstack --os-cloud test server show test
+-------------------------------------+--------------------------------------------------------------------------+
| Field                               | Value                                                                    |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-11T15:23:11.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.52, 192.168.112.131 |
| config_drive | |
| created | 2025-06-11T15:22:49Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4bba3610ae90c6a03b6bed69ae2108327c846f7919898fa960bcde1f |
| host_status | None |
| id | f12f0304-acc9-49fd-b821-30c4e7682b29 |
| image | Cirros 0.6.2 (9b0a6c78-ebe7-4d89-b023-f1a4ead6e10c) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 12b47844c1da49d9a557996d18839fc3 |
| properties | hostname='test' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-11T15:26:02Z |
| user_id | ded7b4603f974d1094248c7dc763c613 |
| volumes_attached | delete_on_termination='False', id='1139f370-fd16-4cd7-b7b7-82a540e223cc' |
+-------------------------------------+--------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-1
+-------------------------------------+--------------------------------------------------------------------------+
| Field                               | Value                                                                    |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-11T15:23:55.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.38, 192.168.112.190 |
| config_drive | |
| created | 2025-06-11T15:23:33Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 1ed7f01b10be85da30ba8af1b1a90d06076feab5707837d2e224ef99 |
| host_status | None |
| id | 2800f509-8146-4410-8a2b-b1f46d8b4157 |
| image | Cirros 0.6.2 (9b0a6c78-ebe7-4d89-b023-f1a4ead6e10c) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 12b47844c1da49d9a557996d18839fc3 |
| properties | hostname='test-1' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-11T15:26:07Z |
| user_id | ded7b4603f974d1094248c7dc763c613 |
| volumes_attached | |
+-------------------------------------+--------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-2
+-------------------------------------+--------------------------------------------------------------------------+
| Field                               | Value                                                                    |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-11T15:24:34.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.34, 192.168.112.161 |
| config_drive | |
| created | 2025-06-11T15:24:13Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 20dce43cc996643e8e6c83d9322229f92f5308f8ddc834466371e519 |
| host_status | None |
| id | 990d3884-73e3-41cf-a4d3-03f0baeecb34 |
| image | Cirros 0.6.2 (9b0a6c78-ebe7-4d89-b023-f1a4ead6e10c) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 12b47844c1da49d9a557996d18839fc3 |
| properties | hostname='test-2' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-11T15:26:11Z |
| user_id | ded7b4603f974d1094248c7dc763c613 |
| volumes_attached | |
+-------------------------------------+--------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-3
+-------------------------------------+--------------------------------------------------------------------------+
| Field                               | Value                                                                    |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-11T15:25:07.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.3, 192.168.112.116 |
| config_drive | |
| created | 2025-06-11T15:24:51Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 1ed7f01b10be85da30ba8af1b1a90d06076feab5707837d2e224ef99 |
| host_status | None |
| id | ab82e9cc-05de-4806-a4cf-e6476b2933a9 |
| image | Cirros 0.6.2 (9b0a6c78-ebe7-4d89-b023-f1a4ead6e10c) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 12b47844c1da49d9a557996d18839fc3 |
| properties | hostname='test-3' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-11T15:26:16Z |
| user_id | ded7b4603f974d1094248c7dc763c613 |
| volumes_attached | |
+-------------------------------------+--------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-4
+-------------------------------------+--------------------------------------------------------------------------+
| Field                               | Value                                                                    |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-11T15:25:41.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.26, 192.168.112.117 |
| config_drive | |
| created | 2025-06-11T15:25:24Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4bba3610ae90c6a03b6bed69ae2108327c846f7919898fa960bcde1f |
| host_status | None |
| id | 07703d86-e307-4e50-a2bc-f244d5d0ffce |
| image | Cirros 0.6.2 (9b0a6c78-ebe7-4d89-b023-f1a4ead6e10c) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-4 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 12b47844c1da49d9a557996d18839fc3 |
| properties | hostname='test-4' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-11T15:26:21Z |
| user_id | ded7b4603f974d1094248c7dc763c613 |
| volumes_attached | |
+-------------------------------------+--------------------------------------------------------------------------+
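The server_ping step that follows expands, per its xtrace, to a loop over every ACTIVE floating IP; reconstructed as a shell function from the trace below (the sourced original may differ):

    # Reconstruction of server_ping from the expanded trace below:
    # list ACTIVE floating IPs, strip carriage returns, ping each three times.
    server_ping() {
        local address
        for address in $(openstack --os-cloud test floating ip list \
                --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
            ping -c3 "$address"
        done
    }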
+ server_ping
++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
++ tr -d '\r'
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.161
PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data.
64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=8.69 ms
64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.13 ms
64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=2.31 ms

--- 192.168.112.161 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.125/4.376/8.692/3.052 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.131
PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=8.19 ms
64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.65 ms
64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.23 ms

--- 192.168.112.131 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.228/4.356/8.187/2.714 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.190
PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=13.2 ms
64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.27 ms
64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.06 ms

--- 192.168.112.190 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.062/5.834/13.171/5.188 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.117
PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=8.04 ms
64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.53 ms
64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.45 ms

--- 192.168.112.117 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 2.449/4.339/8.036/2.614 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.116
PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2025-06-11 15:27:52.602172 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=8.92 ms 2025-06-11 15:27:53.597625 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.87 ms 2025-06-11 15:27:54.598381 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.70 ms 2025-06-11 15:27:54.598482 | orchestrator | 2025-06-11 15:27:54.598500 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-06-11 15:27:54.598514 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-11 15:27:54.598526 | orchestrator | rtt min/avg/max/mdev = 1.701/4.495/8.917/3.162 ms 2025-06-11 15:27:54.598550 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-11 15:27:54.598563 | orchestrator | + compute_list 2025-06-11 15:27:54.598575 | orchestrator | + osism manage compute list testbed-node-3 2025-06-11 15:27:57.919127 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:27:57.919314 | orchestrator | | ID | Name | Status | 2025-06-11 15:27:57.919333 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:27:57.919344 | orchestrator | | 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | 2025-06-11 15:27:57.919355 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:27:58.151445 | orchestrator | + osism manage compute list testbed-node-4 2025-06-11 15:28:01.196565 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:28:01.196679 | orchestrator | | ID | Name | Status | 2025-06-11 15:28:01.196695 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:28:01.196707 | orchestrator | | ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | 2025-06-11 15:28:01.196718 | orchestrator | | 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | 2025-06-11 15:28:01.196729 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:28:01.453951 | orchestrator | + osism manage compute list testbed-node-5 2025-06-11 15:28:04.612554 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:28:04.612667 | orchestrator | | ID | Name | Status | 2025-06-11 15:28:04.612683 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:28:04.612696 | orchestrator | | 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | 2025-06-11 15:28:04.612707 | orchestrator | | f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | 2025-06-11 15:28:04.612718 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:28:04.882965 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-06-11 15:28:07.980411 | orchestrator | 2025-06-11 15:28:07 | INFO  | Live migrating server ab82e9cc-05de-4806-a4cf-e6476b2933a9 2025-06-11 15:28:21.031275 | orchestrator | 2025-06-11 15:28:21 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:23.375389 | orchestrator | 2025-06-11 15:28:23 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:25.858830 | orchestrator | 2025-06-11 15:28:25 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:28.278597 | orchestrator | 2025-06-11 15:28:28 | INFO  | Live migration of 
ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:30.528412 | orchestrator | 2025-06-11 15:28:30 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:33.191211 | orchestrator | 2025-06-11 15:28:33 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:35.459799 | orchestrator | 2025-06-11 15:28:35 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:28:37.797759 | orchestrator | 2025-06-11 15:28:37 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) completed with status ACTIVE 2025-06-11 15:28:37.797879 | orchestrator | 2025-06-11 15:28:37 | INFO  | Live migrating server 2800f509-8146-4410-8a2b-b1f46d8b4157 2025-06-11 15:28:50.225794 | orchestrator | 2025-06-11 15:28:50 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:28:52.607937 | orchestrator | 2025-06-11 15:28:52 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:28:54.947391 | orchestrator | 2025-06-11 15:28:54 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:28:57.288913 | orchestrator | 2025-06-11 15:28:57 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:28:59.549367 | orchestrator | 2025-06-11 15:28:59 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:29:01.974443 | orchestrator | 2025-06-11 15:29:01 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:29:04.262917 | orchestrator | 2025-06-11 15:29:04 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:29:06.575062 | orchestrator | 2025-06-11 15:29:06 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) completed with status ACTIVE 2025-06-11 15:29:06.838195 | orchestrator | + compute_list 2025-06-11 15:29:06.838329 | orchestrator | + osism manage compute list testbed-node-3 2025-06-11 15:29:10.301811 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:29:10.301926 | orchestrator | | ID | Name | Status | 2025-06-11 15:29:10.301942 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:29:10.301954 | orchestrator | | ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | 2025-06-11 15:29:10.301966 | orchestrator | | 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | 2025-06-11 15:29:10.301977 | orchestrator | | 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | 2025-06-11 15:29:10.301988 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:29:10.557905 | orchestrator | + osism manage compute list testbed-node-4 2025-06-11 15:29:13.059892 | orchestrator | +------+--------+----------+ 2025-06-11 15:29:13.060007 | orchestrator | | ID | Name | Status | 2025-06-11 15:29:13.060021 | orchestrator | |------+--------+----------| 2025-06-11 15:29:13.060033 | orchestrator | +------+--------+----------+ 2025-06-11 15:29:13.321264 | orchestrator | + osism manage compute list testbed-node-5 2025-06-11 15:29:16.174341 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:29:16.174456 | 
orchestrator | | ID | Name | Status | 2025-06-11 15:29:16.174471 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:29:16.174483 | orchestrator | | 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | 2025-06-11 15:29:16.174495 | orchestrator | | f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | 2025-06-11 15:29:16.174506 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:29:16.403659 | orchestrator | + server_ping 2025-06-11 15:29:16.404783 | orchestrator | ++ tr -d '\r' 2025-06-11 15:29:16.404826 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-11 15:29:19.185393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:29:19.185490 | orchestrator | + ping -c3 192.168.112.161 2025-06-11 15:29:19.195639 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 2025-06-11 15:29:19.195674 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=6.72 ms 2025-06-11 15:29:20.193707 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.26 ms 2025-06-11 15:29:21.194881 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=2.00 ms 2025-06-11 15:29:21.194989 | orchestrator | 2025-06-11 15:29:21.195004 | orchestrator | --- 192.168.112.161 ping statistics --- 2025-06-11 15:29:21.195016 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-11 15:29:21.195028 | orchestrator | rtt min/avg/max/mdev = 1.995/3.658/6.724/2.170 ms 2025-06-11 15:29:21.195908 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:29:21.195933 | orchestrator | + ping -c3 192.168.112.131 2025-06-11 15:29:21.209802 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2025-06-11 15:29:21.209838 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=8.92 ms 2025-06-11 15:29:22.204824 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.34 ms 2025-06-11 15:29:23.206426 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.92 ms 2025-06-11 15:29:23.206535 | orchestrator | 2025-06-11 15:29:23.206553 | orchestrator | --- 192.168.112.131 ping statistics --- 2025-06-11 15:29:23.206571 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:29:23.206591 | orchestrator | rtt min/avg/max/mdev = 1.922/4.395/8.924/3.207 ms 2025-06-11 15:29:23.206992 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:29:23.207018 | orchestrator | + ping -c3 192.168.112.190 2025-06-11 15:29:23.221360 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 
2025-06-11 15:29:23.221462 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=9.37 ms 2025-06-11 15:29:24.215626 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.44 ms 2025-06-11 15:29:25.216489 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-11 15:29:25.216595 | orchestrator | 2025-06-11 15:29:25.216613 | orchestrator | --- 192.168.112.190 ping statistics --- 2025-06-11 15:29:25.216627 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-11 15:29:25.216638 | orchestrator | rtt min/avg/max/mdev = 1.776/4.529/9.370/3.433 ms 2025-06-11 15:29:25.217084 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:29:25.217114 | orchestrator | + ping -c3 192.168.112.117 2025-06-11 15:29:25.227034 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-06-11 15:29:25.227131 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.32 ms 2025-06-11 15:29:26.226573 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.75 ms 2025-06-11 15:29:27.227654 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.95 ms 2025-06-11 15:29:27.227768 | orchestrator | 2025-06-11 15:29:27.227786 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-06-11 15:29:27.227799 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-11 15:29:27.227811 | orchestrator | rtt min/avg/max/mdev = 1.947/3.341/5.324/1.440 ms 2025-06-11 15:29:27.228100 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:29:27.228125 | orchestrator | + ping -c3 192.168.112.116 2025-06-11 15:29:27.242166 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2025-06-11 15:29:27.242224 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=8.80 ms 2025-06-11 15:29:28.238526 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.86 ms 2025-06-11 15:29:29.239204 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.08 ms 2025-06-11 15:29:29.239339 | orchestrator | 2025-06-11 15:29:29.239357 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-06-11 15:29:29.239370 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:29:29.239381 | orchestrator | rtt min/avg/max/mdev = 2.076/4.577/8.800/3.002 ms 2025-06-11 15:29:29.239393 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-06-11 15:29:32.178758 | orchestrator | 2025-06-11 15:29:32 | INFO  | Live migrating server 07703d86-e307-4e50-a2bc-f244d5d0ffce 2025-06-11 15:29:44.655304 | orchestrator | 2025-06-11 15:29:44 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:47.087382 | orchestrator | 2025-06-11 15:29:47 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:49.446121 | orchestrator | 2025-06-11 15:29:49 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:51.724889 | orchestrator | 2025-06-11 15:29:51 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:54.036410 | orchestrator | 2025-06-11 15:29:54 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:56.312056 | orchestrator | 2025-06-11 15:29:56 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:29:58.692532 | orchestrator | 2025-06-11 15:29:58 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:30:00.979871 | orchestrator | 2025-06-11 15:30:00 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) completed with status ACTIVE 2025-06-11 15:30:00.979977 | orchestrator | 2025-06-11 15:30:00 | INFO  | Live migrating server f12f0304-acc9-49fd-b821-30c4e7682b29 2025-06-11 15:30:11.971762 | orchestrator | 2025-06-11 15:30:11 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:14.365436 | orchestrator | 2025-06-11 15:30:14 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:16.716705 | orchestrator | 2025-06-11 15:30:16 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:18.998064 | orchestrator | 2025-06-11 15:30:18 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:21.358935 | orchestrator | 2025-06-11 15:30:21 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:23.709206 | orchestrator | 2025-06-11 15:30:23 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:26.040332 | orchestrator | 2025-06-11 15:30:26 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:28.396116 | orchestrator | 2025-06-11 15:30:28 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is 
still in progress 2025-06-11 15:30:30.761339 | orchestrator | 2025-06-11 15:30:30 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:30:33.153043 | orchestrator | 2025-06-11 15:30:33 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) completed with status ACTIVE 2025-06-11 15:30:33.460347 | orchestrator | + compute_list 2025-06-11 15:30:33.460501 | orchestrator | + osism manage compute list testbed-node-3 2025-06-11 15:30:36.563575 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:30:36.563698 | orchestrator | | ID | Name | Status | 2025-06-11 15:30:36.563718 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:30:36.563730 | orchestrator | | 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | 2025-06-11 15:30:36.563741 | orchestrator | | ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | 2025-06-11 15:30:36.563752 | orchestrator | | 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | 2025-06-11 15:30:36.563763 | orchestrator | | 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | 2025-06-11 15:30:36.563774 | orchestrator | | f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | 2025-06-11 15:30:36.563785 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:30:36.804328 | orchestrator | + osism manage compute list testbed-node-4 2025-06-11 15:30:39.352039 | orchestrator | +------+--------+----------+ 2025-06-11 15:30:39.352176 | orchestrator | | ID | Name | Status | 2025-06-11 15:30:39.352199 | orchestrator | |------+--------+----------| 2025-06-11 15:30:39.353075 | orchestrator | +------+--------+----------+ 2025-06-11 15:30:39.655394 | orchestrator | + osism manage compute list testbed-node-5 2025-06-11 15:30:42.286474 | orchestrator | +------+--------+----------+ 2025-06-11 15:30:42.286597 | orchestrator | | ID | Name | Status | 2025-06-11 15:30:42.286613 | orchestrator | |------+--------+----------| 2025-06-11 15:30:42.286625 | orchestrator | +------+--------+----------+ 2025-06-11 15:30:42.583718 | orchestrator | + server_ping 2025-06-11 15:30:42.585180 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-11 15:30:42.585607 | orchestrator | ++ tr -d '\r' 2025-06-11 15:30:45.429287 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:30:45.429471 | orchestrator | + ping -c3 192.168.112.161 2025-06-11 15:30:45.440201 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
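Each `osism manage compute migrate --yes --target ...` call above live-migrates every server off the named source host and polls Nova roughly every 2 to 2.5 seconds, logging "still in progress" until the server reports ACTIVE on the target. The same progress can be followed with the plain OpenStack client; a hypothetical watcher, assuming admin credentials (OS-EXT-SRV-ATTR:host is an admin-visible field, and this helper is not part of the job script):

    watch_migration() {
        local id=$1
        # Status reads MIGRATING while a live migration runs and
        # returns to ACTIVE once it converges.
        while [ "$(openstack --os-cloud test server show "$id" -c status -f value)" != "ACTIVE" ]; do
            sleep 2
        done
        # Show which hypervisor the server landed on.
        openstack --os-cloud test server show "$id" -c 'OS-EXT-SRV-ATTR:host' -f value
    }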
2025-06-11 15:30:45.440287 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=7.08 ms 2025-06-11 15:30:46.437550 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=3.01 ms 2025-06-11 15:30:47.438212 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=2.38 ms 2025-06-11 15:30:47.438311 | orchestrator | 2025-06-11 15:30:47.438327 | orchestrator | --- 192.168.112.161 ping statistics --- 2025-06-11 15:30:47.438404 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:30:47.438418 | orchestrator | rtt min/avg/max/mdev = 2.383/4.158/7.082/2.083 ms 2025-06-11 15:30:47.438843 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:30:47.439501 | orchestrator | + ping -c3 192.168.112.131 2025-06-11 15:30:47.452914 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2025-06-11 15:30:47.452943 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=9.07 ms 2025-06-11 15:30:48.448019 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.40 ms 2025-06-11 15:30:49.449764 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.40 ms 2025-06-11 15:30:49.449862 | orchestrator | 2025-06-11 15:30:49.449877 | orchestrator | --- 192.168.112.131 ping statistics --- 2025-06-11 15:30:49.449889 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:30:49.449899 | orchestrator | rtt min/avg/max/mdev = 2.396/4.621/9.065/3.142 ms 2025-06-11 15:30:49.450190 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:30:49.450212 | orchestrator | + ping -c3 192.168.112.190 2025-06-11 15:30:49.467179 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 2025-06-11 15:30:49.467281 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=11.6 ms 2025-06-11 15:30:50.458850 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.03 ms 2025-06-11 15:30:51.459971 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.88 ms 2025-06-11 15:30:51.460075 | orchestrator | 2025-06-11 15:30:51.460093 | orchestrator | --- 192.168.112.190 ping statistics --- 2025-06-11 15:30:51.460107 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-11 15:30:51.460119 | orchestrator | rtt min/avg/max/mdev = 1.883/5.169/11.592/4.541 ms 2025-06-11 15:30:51.460495 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:30:51.460522 | orchestrator | + ping -c3 192.168.112.117 2025-06-11 15:30:51.472923 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-06-11 15:30:51.472990 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.80 ms 2025-06-11 15:30:52.469596 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.25 ms 2025-06-11 15:30:53.471570 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.11 ms 2025-06-11 15:30:53.471714 | orchestrator | 2025-06-11 15:30:53.471732 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-06-11 15:30:53.471753 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:30:53.471773 | orchestrator | rtt min/avg/max/mdev = 2.109/3.719/6.803/2.181 ms 2025-06-11 15:30:53.471816 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:30:53.471830 | orchestrator | + ping -c3 192.168.112.116 2025-06-11 15:30:53.487689 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-06-11 15:30:53.487771 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=10.7 ms 2025-06-11 15:30:54.482306 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.12 ms 2025-06-11 15:30:55.483863 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=3.27 ms 2025-06-11 15:30:55.483972 | orchestrator | 2025-06-11 15:30:55.483989 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-06-11 15:30:55.484003 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-11 15:30:55.484015 | orchestrator | rtt min/avg/max/mdev = 2.116/5.379/10.749/3.826 ms 2025-06-11 15:30:55.484658 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-06-11 15:30:58.817042 | orchestrator | 2025-06-11 15:30:58 | INFO  | Live migrating server 07703d86-e307-4e50-a2bc-f244d5d0ffce 2025-06-11 15:31:10.112282 | orchestrator | 2025-06-11 15:31:10 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:12.496596 | orchestrator | 2025-06-11 15:31:12 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:14.864444 | orchestrator | 2025-06-11 15:31:14 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:17.159265 | orchestrator | 2025-06-11 15:31:17 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:19.490672 | orchestrator | 2025-06-11 15:31:19 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:21.779486 | orchestrator | 2025-06-11 15:31:21 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:24.087041 | orchestrator | 2025-06-11 15:31:24 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:31:26.333356 | orchestrator | 2025-06-11 15:31:26 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) completed with status ACTIVE 2025-06-11 15:31:26.333480 | orchestrator | 2025-06-11 15:31:26 | INFO  | Live migrating server ab82e9cc-05de-4806-a4cf-e6476b2933a9 2025-06-11 15:31:38.108030 | orchestrator | 2025-06-11 15:31:38 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:40.466270 | orchestrator | 2025-06-11 15:31:40 | INFO  | Live migration 
of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:42.825624 | orchestrator | 2025-06-11 15:31:42 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:45.151979 | orchestrator | 2025-06-11 15:31:45 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:47.487474 | orchestrator | 2025-06-11 15:31:47 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:49.787712 | orchestrator | 2025-06-11 15:31:49 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:52.076153 | orchestrator | 2025-06-11 15:31:52 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:31:54.316748 | orchestrator | 2025-06-11 15:31:54 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) completed with status ACTIVE 2025-06-11 15:31:54.316832 | orchestrator | 2025-06-11 15:31:54 | INFO  | Live migrating server 990d3884-73e3-41cf-a4d3-03f0baeecb34 2025-06-11 15:32:04.053513 | orchestrator | 2025-06-11 15:32:04 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:06.432229 | orchestrator | 2025-06-11 15:32:06 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:08.799086 | orchestrator | 2025-06-11 15:32:08 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:11.109294 | orchestrator | 2025-06-11 15:32:11 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:13.396506 | orchestrator | 2025-06-11 15:32:13 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:15.690653 | orchestrator | 2025-06-11 15:32:15 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:18.059753 | orchestrator | 2025-06-11 15:32:18 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:32:20.315384 | orchestrator | 2025-06-11 15:32:20 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) completed with status ACTIVE 2025-06-11 15:32:20.315624 | orchestrator | 2025-06-11 15:32:20 | INFO  | Live migrating server 2800f509-8146-4410-8a2b-b1f46d8b4157 2025-06-11 15:32:31.757234 | orchestrator | 2025-06-11 15:32:31 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:34.107608 | orchestrator | 2025-06-11 15:32:34 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:36.464164 | orchestrator | 2025-06-11 15:32:36 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:38.801646 | orchestrator | 2025-06-11 15:32:38 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:41.088929 | orchestrator | 2025-06-11 15:32:41 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:43.341198 | orchestrator | 2025-06-11 15:32:43 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 
15:32:45.641933 | orchestrator | 2025-06-11 15:32:45 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:32:47.978603 | orchestrator | 2025-06-11 15:32:47 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) completed with status ACTIVE 2025-06-11 15:32:47.978708 | orchestrator | 2025-06-11 15:32:47 | INFO  | Live migrating server f12f0304-acc9-49fd-b821-30c4e7682b29 2025-06-11 15:33:00.102747 | orchestrator | 2025-06-11 15:33:00 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:02.498906 | orchestrator | 2025-06-11 15:33:02 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:04.779334 | orchestrator | 2025-06-11 15:33:04 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:07.081945 | orchestrator | 2025-06-11 15:33:07 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:09.416771 | orchestrator | 2025-06-11 15:33:09 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:11.938286 | orchestrator | 2025-06-11 15:33:11 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:14.236796 | orchestrator | 2025-06-11 15:33:14 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:16.524089 | orchestrator | 2025-06-11 15:33:16 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:18.893754 | orchestrator | 2025-06-11 15:33:18 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:33:21.432083 | orchestrator | 2025-06-11 15:33:21 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) completed with status ACTIVE 2025-06-11 15:33:21.705924 | orchestrator | + compute_list 2025-06-11 15:33:21.706073 | orchestrator | + osism manage compute list testbed-node-3 2025-06-11 15:33:24.203281 | orchestrator | +------+--------+----------+ 2025-06-11 15:33:24.203393 | orchestrator | | ID | Name | Status | 2025-06-11 15:33:24.203408 | orchestrator | |------+--------+----------| 2025-06-11 15:33:24.203419 | orchestrator | +------+--------+----------+ 2025-06-11 15:33:24.458502 | orchestrator | + osism manage compute list testbed-node-4 2025-06-11 15:33:27.694355 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:33:27.694480 | orchestrator | | ID | Name | Status | 2025-06-11 15:33:27.694498 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:33:27.694581 | orchestrator | | 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | 2025-06-11 15:33:27.694595 | orchestrator | | ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | 2025-06-11 15:33:27.694606 | orchestrator | | 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | 2025-06-11 15:33:27.694617 | orchestrator | | 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | 2025-06-11 15:33:27.694628 | orchestrator | | f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | 2025-06-11 15:33:27.694638 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:33:27.984225 | orchestrator | + osism manage compute list testbed-node-5 2025-06-11 15:33:30.470992 
| orchestrator | +------+--------+----------+ 2025-06-11 15:33:30.471110 | orchestrator | | ID | Name | Status | 2025-06-11 15:33:30.471125 | orchestrator | |------+--------+----------| 2025-06-11 15:33:30.471137 | orchestrator | +------+--------+----------+ 2025-06-11 15:33:30.699218 | orchestrator | + server_ping 2025-06-11 15:33:30.700444 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-11 15:33:30.700475 | orchestrator | ++ tr -d '\r' 2025-06-11 15:33:33.865669 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:33:33.865776 | orchestrator | + ping -c3 192.168.112.161 2025-06-11 15:33:33.878967 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 2025-06-11 15:33:33.879045 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=9.07 ms 2025-06-11 15:33:34.873628 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.41 ms 2025-06-11 15:33:35.874227 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.88 ms 2025-06-11 15:33:35.874343 | orchestrator | 2025-06-11 15:33:35.874482 | orchestrator | --- 192.168.112.161 ping statistics --- 2025-06-11 15:33:35.874498 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-11 15:33:35.874510 | orchestrator | rtt min/avg/max/mdev = 1.880/4.455/9.073/3.272 ms 2025-06-11 15:33:35.874572 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:33:35.874586 | orchestrator | + ping -c3 192.168.112.131 2025-06-11 15:33:35.888009 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2025-06-11 15:33:35.888105 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=7.20 ms 2025-06-11 15:33:36.884980 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.68 ms 2025-06-11 15:33:37.885948 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.06 ms 2025-06-11 15:33:37.886136 | orchestrator | 2025-06-11 15:33:37.886156 | orchestrator | --- 192.168.112.131 ping statistics --- 2025-06-11 15:33:37.886169 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:33:37.886181 | orchestrator | rtt min/avg/max/mdev = 2.060/3.979/7.200/2.291 ms 2025-06-11 15:33:37.886763 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:33:37.886798 | orchestrator | + ping -c3 192.168.112.190 2025-06-11 15:33:37.900926 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 
2025-06-11 15:33:37.900975 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=10.0 ms 2025-06-11 15:33:38.895626 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=3.01 ms 2025-06-11 15:33:39.895762 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.06 ms 2025-06-11 15:33:39.895873 | orchestrator | 2025-06-11 15:33:39.895890 | orchestrator | --- 192.168.112.190 ping statistics --- 2025-06-11 15:33:39.895903 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-11 15:33:39.895915 | orchestrator | rtt min/avg/max/mdev = 2.056/5.024/10.002/3.541 ms 2025-06-11 15:33:39.896255 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:33:39.896281 | orchestrator | + ping -c3 192.168.112.117 2025-06-11 15:33:39.906425 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-06-11 15:33:39.906450 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.59 ms 2025-06-11 15:33:40.905357 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.43 ms 2025-06-11 15:33:41.907301 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.00 ms 2025-06-11 15:33:41.907404 | orchestrator | 2025-06-11 15:33:41.907419 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-06-11 15:33:41.907432 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:33:41.907443 | orchestrator | rtt min/avg/max/mdev = 2.001/3.341/5.588/1.598 ms 2025-06-11 15:33:41.907455 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:33:41.907466 | orchestrator | + ping -c3 192.168.112.116 2025-06-11 15:33:41.919674 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
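The step repeats one fixed cycle per rotation: evacuate a host, list all three compute nodes to confirm placement, then ping every floating IP to prove the workloads stayed reachable throughout. The compute_list commands are verbatim from the trace; the rest is a sketch of one round as it appears in this log:

    compute_list() {
        osism manage compute list testbed-node-3
        osism manage compute list testbed-node-4
        osism manage compute list testbed-node-5
    }

    # One rotation step, e.g. moving everything from node-4 onto node-5.
    osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
    compute_list
    server_ping

Note the 0% packet loss in every ping round: connectivity to the floating IPs holds across all live migrations, which is what this check is designed to demonstrate.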
2025-06-11 15:33:41.919726 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.96 ms 2025-06-11 15:33:42.915643 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.52 ms 2025-06-11 15:33:43.917958 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.17 ms 2025-06-11 15:33:43.918141 | orchestrator | 2025-06-11 15:33:43.918159 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-06-11 15:33:43.918173 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:33:43.918185 | orchestrator | rtt min/avg/max/mdev = 2.171/4.216/7.963/2.652 ms 2025-06-11 15:33:43.919980 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-06-11 15:33:47.064111 | orchestrator | 2025-06-11 15:33:47 | INFO  | Live migrating server 07703d86-e307-4e50-a2bc-f244d5d0ffce 2025-06-11 15:33:56.791424 | orchestrator | 2025-06-11 15:33:56 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:33:59.168764 | orchestrator | 2025-06-11 15:33:59 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:01.546530 | orchestrator | 2025-06-11 15:34:01 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:03.819597 | orchestrator | 2025-06-11 15:34:03 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:06.099800 | orchestrator | 2025-06-11 15:34:06 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:08.653085 | orchestrator | 2025-06-11 15:34:08 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:11.150258 | orchestrator | 2025-06-11 15:34:11 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) is still in progress 2025-06-11 15:34:13.685059 | orchestrator | 2025-06-11 15:34:13 | INFO  | Live migration of 07703d86-e307-4e50-a2bc-f244d5d0ffce (test-4) completed with status ACTIVE 2025-06-11 15:34:13.685164 | orchestrator | 2025-06-11 15:34:13 | INFO  | Live migrating server ab82e9cc-05de-4806-a4cf-e6476b2933a9 2025-06-11 15:34:23.949232 | orchestrator | 2025-06-11 15:34:23 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:26.331962 | orchestrator | 2025-06-11 15:34:26 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:28.689110 | orchestrator | 2025-06-11 15:34:28 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:31.051836 | orchestrator | 2025-06-11 15:34:31 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:33.332995 | orchestrator | 2025-06-11 15:34:33 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:35.646222 | orchestrator | 2025-06-11 15:34:35 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:38.015070 | orchestrator | 2025-06-11 15:34:38 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 (test-3) is still in progress 2025-06-11 15:34:40.353276 | orchestrator | 2025-06-11 15:34:40 | INFO  | Live migration of ab82e9cc-05de-4806-a4cf-e6476b2933a9 
(test-3) completed with status ACTIVE 2025-06-11 15:34:40.353379 | orchestrator | 2025-06-11 15:34:40 | INFO  | Live migrating server 990d3884-73e3-41cf-a4d3-03f0baeecb34 2025-06-11 15:34:50.496314 | orchestrator | 2025-06-11 15:34:50 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:34:52.860793 | orchestrator | 2025-06-11 15:34:52 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:34:55.194344 | orchestrator | 2025-06-11 15:34:55 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:34:57.515000 | orchestrator | 2025-06-11 15:34:57 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:34:59.875309 | orchestrator | 2025-06-11 15:34:59 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:35:02.176929 | orchestrator | 2025-06-11 15:35:02 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:35:04.546799 | orchestrator | 2025-06-11 15:35:04 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) is still in progress 2025-06-11 15:35:06.896680 | orchestrator | 2025-06-11 15:35:06 | INFO  | Live migration of 990d3884-73e3-41cf-a4d3-03f0baeecb34 (test-2) completed with status ACTIVE 2025-06-11 15:35:06.896786 | orchestrator | 2025-06-11 15:35:06 | INFO  | Live migrating server 2800f509-8146-4410-8a2b-b1f46d8b4157 2025-06-11 15:35:17.124019 | orchestrator | 2025-06-11 15:35:17 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:19.492672 | orchestrator | 2025-06-11 15:35:19 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:21.848358 | orchestrator | 2025-06-11 15:35:21 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:24.154765 | orchestrator | 2025-06-11 15:35:24 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:26.478135 | orchestrator | 2025-06-11 15:35:26 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:28.760101 | orchestrator | 2025-06-11 15:35:28 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:31.126777 | orchestrator | 2025-06-11 15:35:31 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) is still in progress 2025-06-11 15:35:33.503000 | orchestrator | 2025-06-11 15:35:33 | INFO  | Live migration of 2800f509-8146-4410-8a2b-b1f46d8b4157 (test-1) completed with status ACTIVE 2025-06-11 15:35:33.503103 | orchestrator | 2025-06-11 15:35:33 | INFO  | Live migrating server f12f0304-acc9-49fd-b821-30c4e7682b29 2025-06-11 15:35:44.578682 | orchestrator | 2025-06-11 15:35:44 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:46.947619 | orchestrator | 2025-06-11 15:35:46 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:49.316153 | orchestrator | 2025-06-11 15:35:49 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:51.729384 | orchestrator | 2025-06-11 15:35:51 | INFO  | Live 
migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:54.107922 | orchestrator | 2025-06-11 15:35:54 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:56.426840 | orchestrator | 2025-06-11 15:35:56 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:35:58.849821 | orchestrator | 2025-06-11 15:35:58 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:36:01.196273 | orchestrator | 2025-06-11 15:36:01 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) is still in progress 2025-06-11 15:36:03.497327 | orchestrator | 2025-06-11 15:36:03 | INFO  | Live migration of f12f0304-acc9-49fd-b821-30c4e7682b29 (test) completed with status ACTIVE 2025-06-11 15:36:03.736717 | orchestrator | + compute_list 2025-06-11 15:36:03.736814 | orchestrator | + osism manage compute list testbed-node-3 2025-06-11 15:36:06.216222 | orchestrator | +------+--------+----------+ 2025-06-11 15:36:06.216335 | orchestrator | | ID | Name | Status | 2025-06-11 15:36:06.216349 | orchestrator | |------+--------+----------| 2025-06-11 15:36:06.216361 | orchestrator | +------+--------+----------+ 2025-06-11 15:36:06.465342 | orchestrator | + osism manage compute list testbed-node-4 2025-06-11 15:36:08.961864 | orchestrator | +------+--------+----------+ 2025-06-11 15:36:08.961963 | orchestrator | | ID | Name | Status | 2025-06-11 15:36:08.961972 | orchestrator | |------+--------+----------| 2025-06-11 15:36:08.961978 | orchestrator | +------+--------+----------+ 2025-06-11 15:36:09.251969 | orchestrator | + osism manage compute list testbed-node-5 2025-06-11 15:36:12.422453 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:36:12.422567 | orchestrator | | ID | Name | Status | 2025-06-11 15:36:12.422583 | orchestrator | |--------------------------------------+--------+----------| 2025-06-11 15:36:12.422594 | orchestrator | | 07703d86-e307-4e50-a2bc-f244d5d0ffce | test-4 | ACTIVE | 2025-06-11 15:36:12.422605 | orchestrator | | ab82e9cc-05de-4806-a4cf-e6476b2933a9 | test-3 | ACTIVE | 2025-06-11 15:36:12.422616 | orchestrator | | 990d3884-73e3-41cf-a4d3-03f0baeecb34 | test-2 | ACTIVE | 2025-06-11 15:36:12.422627 | orchestrator | | 2800f509-8146-4410-8a2b-b1f46d8b4157 | test-1 | ACTIVE | 2025-06-11 15:36:12.422638 | orchestrator | | f12f0304-acc9-49fd-b821-30c4e7682b29 | test | ACTIVE | 2025-06-11 15:36:12.422708 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-11 15:36:12.702919 | orchestrator | + server_ping 2025-06-11 15:36:12.704197 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-11 15:36:12.704773 | orchestrator | ++ tr -d '\r' 2025-06-11 15:36:15.593643 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:36:15.593811 | orchestrator | + ping -c3 192.168.112.161 2025-06-11 15:36:15.604218 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
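After this final round the listings above show testbed-node-3 and testbed-node-4 empty and all five servers on testbed-node-5, which is the success criterion for the rotation. To assert that mechanically instead of reading the tables, a hypothetical check (the --host filter on server list is an admin-level option and is not used by the job itself):

    for host in testbed-node-3 testbed-node-4; do
        # Fail if any server is still reported on an evacuated host.
        if [ -n "$(openstack --os-cloud test server list --host "$host" -f value -c ID)" ]; then
            echo "ERROR: $host is not empty" >&2
            exit 1
        fi
    done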
2025-06-11 15:36:15.604250 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=8.89 ms 2025-06-11 15:36:16.600147 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=3.39 ms 2025-06-11 15:36:17.601384 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=2.37 ms 2025-06-11 15:36:17.601495 | orchestrator | 2025-06-11 15:36:17.601563 | orchestrator | --- 192.168.112.161 ping statistics --- 2025-06-11 15:36:17.601579 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:36:17.601591 | orchestrator | rtt min/avg/max/mdev = 2.365/4.883/8.892/2.865 ms 2025-06-11 15:36:17.601751 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:36:17.601770 | orchestrator | + ping -c3 192.168.112.131 2025-06-11 15:36:17.613069 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2025-06-11 15:36:17.613123 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=6.87 ms 2025-06-11 15:36:18.610272 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.59 ms 2025-06-11 15:36:19.611782 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-11 15:36:19.611915 | orchestrator | 2025-06-11 15:36:19.611939 | orchestrator | --- 192.168.112.131 ping statistics --- 2025-06-11 15:36:19.611955 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:36:19.611967 | orchestrator | rtt min/avg/max/mdev = 1.777/3.746/6.869/2.233 ms 2025-06-11 15:36:19.612294 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:36:19.612319 | orchestrator | + ping -c3 192.168.112.190 2025-06-11 15:36:19.624841 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 2025-06-11 15:36:19.625017 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=8.02 ms 2025-06-11 15:36:20.620728 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.63 ms 2025-06-11 15:36:21.622464 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.64 ms 2025-06-11 15:36:21.622565 | orchestrator | 2025-06-11 15:36:21.622582 | orchestrator | --- 192.168.112.190 ping statistics --- 2025-06-11 15:36:21.622596 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:36:21.622608 | orchestrator | rtt min/avg/max/mdev = 1.635/4.095/8.021/2.805 ms 2025-06-11 15:36:21.622619 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:36:21.622632 | orchestrator | + ping -c3 192.168.112.117 2025-06-11 15:36:21.634157 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-06-11 15:36:21.634244 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.94 ms 2025-06-11 15:36:22.631409 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.50 ms 2025-06-11 15:36:23.633610 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.80 ms 2025-06-11 15:36:23.633784 | orchestrator | 2025-06-11 15:36:23.633801 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-06-11 15:36:23.633815 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:36:23.633826 | orchestrator | rtt min/avg/max/mdev = 2.502/4.078/6.937/2.024 ms 2025-06-11 15:36:23.634284 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-11 15:36:23.634313 | orchestrator | + ping -c3 192.168.112.116 2025-06-11 15:36:23.646819 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-06-11 15:36:23.646850 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.28 ms 2025-06-11 15:36:24.644629 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=3.28 ms 2025-06-11 15:36:25.645805 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.07 ms 2025-06-11 15:36:25.646563 | orchestrator | 2025-06-11 15:36:25.646599 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-06-11 15:36:25.646614 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-11 15:36:25.646628 | orchestrator | rtt min/avg/max/mdev = 2.071/4.208/7.279/2.226 ms 2025-06-11 15:36:26.004505 | orchestrator | ok: Runtime: 0:18:17.715455 2025-06-11 15:36:26.062393 | 2025-06-11 15:36:26.062547 | TASK [Run tempest] 2025-06-11 15:36:26.606657 | orchestrator | skipping: Conditional result was False 2025-06-11 15:36:26.628485 | 2025-06-11 15:36:26.628662 | TASK [Check prometheus alert status] 2025-06-11 15:36:27.166924 | orchestrator | skipping: Conditional result was False 2025-06-11 15:36:27.169970 | 2025-06-11 15:36:27.170177 | PLAY RECAP 2025-06-11 15:36:27.170331 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-11 15:36:27.170402 | 2025-06-11 15:36:27.473000 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-11 15:36:27.474071 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-11 15:36:28.286410 | 2025-06-11 15:36:28.286587 | PLAY [Post output play] 2025-06-11 15:36:28.305105 | 2025-06-11 15:36:28.305277 | LOOP [stage-output : Register sources] 2025-06-11 15:36:28.375646 | 2025-06-11 15:36:28.375994 | TASK [stage-output : Check sudo] 2025-06-11 15:36:29.364685 | orchestrator | sudo: a password is required 2025-06-11 15:36:29.417039 | orchestrator | ok: Runtime: 0:00:00.154418 2025-06-11 15:36:29.428881 | 2025-06-11 15:36:29.429883 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-11 15:36:29.477858 | 2025-06-11 15:36:29.478176 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-11 15:36:29.550900 | orchestrator | ok 2025-06-11 15:36:29.560940 | 2025-06-11 15:36:29.561095 | LOOP [stage-output : Ensure target folders exist] 2025-06-11 15:36:30.056756 | orchestrator | ok: "docs" 2025-06-11 15:36:30.057232 | 2025-06-11 15:36:30.325303 | orchestrator | ok: "artifacts" 2025-06-11 15:36:30.588347 | orchestrator | ok: "logs" 2025-06-11 15:36:30.611523 | 2025-06-11 
15:36:30.611745 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-11 15:36:30.653986 | 2025-06-11 15:36:30.654362 | TASK [stage-output : Make all log files readable] 2025-06-11 15:36:30.950754 | orchestrator | ok 2025-06-11 15:36:30.965098 | 2025-06-11 15:36:30.965356 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-11 15:36:31.003307 | orchestrator | skipping: Conditional result was False 2025-06-11 15:36:31.023083 | 2025-06-11 15:36:31.023306 | TASK [stage-output : Discover log files for compression] 2025-06-11 15:36:31.049361 | orchestrator | skipping: Conditional result was False 2025-06-11 15:36:31.066729 | 2025-06-11 15:36:31.066923 | LOOP [stage-output : Archive everything from logs] 2025-06-11 15:36:31.113783 | 2025-06-11 15:36:31.113980 | PLAY [Post cleanup play] 2025-06-11 15:36:31.123025 | 2025-06-11 15:36:31.123196 | TASK [Set cloud fact (Zuul deployment)] 2025-06-11 15:36:31.186009 | orchestrator | ok 2025-06-11 15:36:31.196019 | 2025-06-11 15:36:31.196136 | TASK [Set cloud fact (local deployment)] 2025-06-11 15:36:31.230743 | orchestrator | skipping: Conditional result was False 2025-06-11 15:36:31.247439 | 2025-06-11 15:36:31.247601 | TASK [Clean the cloud environment] 2025-06-11 15:36:31.977254 | orchestrator | 2025-06-11 15:36:31 - clean up servers 2025-06-11 15:36:32.725278 | orchestrator | 2025-06-11 15:36:32 - testbed-manager 2025-06-11 15:36:32.808973 | orchestrator | 2025-06-11 15:36:32 - testbed-node-1 2025-06-11 15:36:32.894592 | orchestrator | 2025-06-11 15:36:32 - testbed-node-2 2025-06-11 15:36:32.978218 | orchestrator | 2025-06-11 15:36:32 - testbed-node-0 2025-06-11 15:36:33.069141 | orchestrator | 2025-06-11 15:36:33 - testbed-node-4 2025-06-11 15:36:33.167097 | orchestrator | 2025-06-11 15:36:33 - testbed-node-3 2025-06-11 15:36:33.260028 | orchestrator | 2025-06-11 15:36:33 - testbed-node-5 2025-06-11 15:36:33.349936 | orchestrator | 2025-06-11 15:36:33 - clean up keypairs 2025-06-11 15:36:33.370959 | orchestrator | 2025-06-11 15:36:33 - testbed 2025-06-11 15:36:33.395790 | orchestrator | 2025-06-11 15:36:33 - wait for servers to be gone 2025-06-11 15:36:44.339554 | orchestrator | 2025-06-11 15:36:44 - clean up ports 2025-06-11 15:36:44.532975 | orchestrator | 2025-06-11 15:36:44 - 48fbcd2d-fd3e-45ab-8675-cdf8b7b6b27c 2025-06-11 15:36:44.808219 | orchestrator | 2025-06-11 15:36:44 - 5e4cb412-5143-48b5-aaaa-5a969bfbb297 2025-06-11 15:36:45.067360 | orchestrator | 2025-06-11 15:36:45 - 63011a3b-0b6c-4e0b-837f-da60af87047f 2025-06-11 15:36:45.307139 | orchestrator | 2025-06-11 15:36:45 - 7f47c4f7-62d9-4a3b-a74d-b6c43f3c5427 2025-06-11 15:36:45.527368 | orchestrator | 2025-06-11 15:36:45 - ad9d4087-a651-4b6d-a662-1242f861444c 2025-06-11 15:36:45.731609 | orchestrator | 2025-06-11 15:36:45 - ccac584e-4850-4d4f-be37-c3e3caf53f46 2025-06-11 15:36:46.102877 | orchestrator | 2025-06-11 15:36:46 - f40b00da-502b-45fb-b825-13239028b3db 2025-06-11 15:36:46.315219 | orchestrator | 2025-06-11 15:36:46 - clean up volumes 2025-06-11 15:36:46.426010 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-4-node-base 2025-06-11 15:36:46.465366 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-3-node-base 2025-06-11 15:36:46.505744 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-5-node-base 2025-06-11 15:36:46.546075 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-1-node-base 2025-06-11 15:36:46.589897 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-0-node-base 2025-06-11 15:36:46.631878 | orchestrator | 
2025-06-11 15:36:31.247601 | TASK [Clean the cloud environment]
2025-06-11 15:36:31.977254 | orchestrator | 2025-06-11 15:36:31 - clean up servers
2025-06-11 15:36:32.725278 | orchestrator | 2025-06-11 15:36:32 - testbed-manager
2025-06-11 15:36:32.808973 | orchestrator | 2025-06-11 15:36:32 - testbed-node-1
2025-06-11 15:36:32.894592 | orchestrator | 2025-06-11 15:36:32 - testbed-node-2
2025-06-11 15:36:32.978218 | orchestrator | 2025-06-11 15:36:32 - testbed-node-0
2025-06-11 15:36:33.069141 | orchestrator | 2025-06-11 15:36:33 - testbed-node-4
2025-06-11 15:36:33.167097 | orchestrator | 2025-06-11 15:36:33 - testbed-node-3
2025-06-11 15:36:33.260028 | orchestrator | 2025-06-11 15:36:33 - testbed-node-5
2025-06-11 15:36:33.349936 | orchestrator | 2025-06-11 15:36:33 - clean up keypairs
2025-06-11 15:36:33.370959 | orchestrator | 2025-06-11 15:36:33 - testbed
2025-06-11 15:36:33.395790 | orchestrator | 2025-06-11 15:36:33 - wait for servers to be gone
2025-06-11 15:36:44.339554 | orchestrator | 2025-06-11 15:36:44 - clean up ports
2025-06-11 15:36:44.532975 | orchestrator | 2025-06-11 15:36:44 - 48fbcd2d-fd3e-45ab-8675-cdf8b7b6b27c
2025-06-11 15:36:44.808219 | orchestrator | 2025-06-11 15:36:44 - 5e4cb412-5143-48b5-aaaa-5a969bfbb297
2025-06-11 15:36:45.067360 | orchestrator | 2025-06-11 15:36:45 - 63011a3b-0b6c-4e0b-837f-da60af87047f
2025-06-11 15:36:45.307139 | orchestrator | 2025-06-11 15:36:45 - 7f47c4f7-62d9-4a3b-a74d-b6c43f3c5427
2025-06-11 15:36:45.527368 | orchestrator | 2025-06-11 15:36:45 - ad9d4087-a651-4b6d-a662-1242f861444c
2025-06-11 15:36:45.731609 | orchestrator | 2025-06-11 15:36:45 - ccac584e-4850-4d4f-be37-c3e3caf53f46
2025-06-11 15:36:46.102877 | orchestrator | 2025-06-11 15:36:46 - f40b00da-502b-45fb-b825-13239028b3db
2025-06-11 15:36:46.315219 | orchestrator | 2025-06-11 15:36:46 - clean up volumes
2025-06-11 15:36:46.426010 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-4-node-base
2025-06-11 15:36:46.465366 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-3-node-base
2025-06-11 15:36:46.505744 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-5-node-base
2025-06-11 15:36:46.546075 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-1-node-base
2025-06-11 15:36:46.589897 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-0-node-base
2025-06-11 15:36:46.631878 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-2-node-base
2025-06-11 15:36:46.675084 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-manager-base
2025-06-11 15:36:46.716632 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-4-node-4
2025-06-11 15:36:46.762623 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-1-node-4
2025-06-11 15:36:46.805631 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-2-node-5
2025-06-11 15:36:46.846689 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-3-node-3
2025-06-11 15:36:46.888469 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-6-node-3
2025-06-11 15:36:46.928399 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-5-node-5
2025-06-11 15:36:46.967325 | orchestrator | 2025-06-11 15:36:46 - testbed-volume-7-node-4
2025-06-11 15:36:47.007976 | orchestrator | 2025-06-11 15:36:47 - testbed-volume-0-node-3
2025-06-11 15:36:47.048077 | orchestrator | 2025-06-11 15:36:47 - testbed-volume-8-node-5
2025-06-11 15:36:47.087580 | orchestrator | 2025-06-11 15:36:47 - disconnect routers
2025-06-11 15:36:47.206215 | orchestrator | 2025-06-11 15:36:47 - testbed
2025-06-11 15:36:48.142709 | orchestrator | 2025-06-11 15:36:48 - clean up subnets
2025-06-11 15:36:48.184880 | orchestrator | 2025-06-11 15:36:48 - subnet-testbed-management
2025-06-11 15:36:48.342940 | orchestrator | 2025-06-11 15:36:48 - clean up networks
2025-06-11 15:36:48.518841 | orchestrator | 2025-06-11 15:36:48 - net-testbed-management
2025-06-11 15:36:48.804973 | orchestrator | 2025-06-11 15:36:48 - clean up security groups
2025-06-11 15:36:48.846146 | orchestrator | 2025-06-11 15:36:48 - testbed-management
2025-06-11 15:36:48.962106 | orchestrator | 2025-06-11 15:36:48 - testbed-node
2025-06-11 15:36:49.079540 | orchestrator | 2025-06-11 15:36:49 - clean up floating ips
2025-06-11 15:36:49.113126 | orchestrator | 2025-06-11 15:36:49 - 81.163.192.182
2025-06-11 15:36:49.461507 | orchestrator | 2025-06-11 15:36:49 - clean up routers
2025-06-11 15:36:49.977521 | orchestrator | 2025-06-11 15:36:49 - testbed
2025-06-11 15:36:51.315543 | orchestrator | ok: Runtime: 0:00:19.285730
2025-06-11 15:36:51.320170 |
2025-06-11 15:36:51.320346 | PLAY RECAP
2025-06-11 15:36:51.320477 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-11 15:36:51.320544 |
2025-06-11 15:36:51.483501 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-11 15:36:51.485581 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-11 15:36:52.253529 |
2025-06-11 15:36:52.253706 | PLAY [Cleanup play]
2025-06-11 15:36:52.270504 |
2025-06-11 15:36:52.270667 | TASK [Set cloud fact (Zuul deployment)]
2025-06-11 15:36:52.328750 | orchestrator | ok
2025-06-11 15:36:52.339014 |
2025-06-11 15:36:52.339191 | TASK [Set cloud fact (local deployment)]
2025-06-11 15:36:52.373938 | orchestrator | skipping: Conditional result was False
2025-06-11 15:36:52.391151 |
2025-06-11 15:36:52.391293 | TASK [Clean the cloud environment]
2025-06-11 15:36:53.531474 | orchestrator | 2025-06-11 15:36:53 - clean up servers
2025-06-11 15:36:53.987591 | orchestrator | 2025-06-11 15:36:53 - clean up keypairs
2025-06-11 15:36:54.000324 | orchestrator | 2025-06-11 15:36:54 - wait for servers to be gone
2025-06-11 15:36:54.043408 | orchestrator | 2025-06-11 15:36:54 - clean up ports
2025-06-11 15:36:54.135871 | orchestrator | 2025-06-11 15:36:54 - clean up volumes
2025-06-11 15:36:54.207368 | orchestrator | 2025-06-11 15:36:54 - disconnect routers
2025-06-11 15:36:54.230074 | orchestrator | 2025-06-11 15:36:54 - clean up subnets
2025-06-11 15:36:54.265185 | orchestrator | 2025-06-11 15:36:54 - clean up networks
2025-06-11 15:36:54.419766 | orchestrator | 2025-06-11 15:36:54 - clean up security groups
2025-06-11 15:36:54.459770 | orchestrator | 2025-06-11 15:36:54 - clean up floating ips
2025-06-11 15:36:54.484283 | orchestrator | 2025-06-11 15:36:54 - clean up routers
2025-06-11 15:36:54.931506 | orchestrator | ok: Runtime: 0:00:01.360682
2025-06-11 15:36:54.935579 |
2025-06-11 15:36:54.935725 | PLAY RECAP
2025-06-11 15:36:54.935836 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-11 15:36:54.935911 |
2025-06-11 15:36:55.077809 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
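Both cleanup passes log "wait for servers to be gone" before deleting ports and volumes, since those cannot be removed while an instance still holds them; the wait dominates the first pass (about 11 of its 19 seconds), while the second pass from cleanup.yml finds nothing left and finishes in about a second, confirming the teardown is idempotent. A minimal polling loop in that spirit (the name filter is an assumption):

  # Hypothetical sketch: block until no testbed server remains.
  while [ -n "$(openstack --os-cloud test server list --name 'testbed-' -f value -c ID)" ]; do
      sleep 2
  done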
2025-06-11 15:36:55.080223 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-11 15:36:55.868213 |
2025-06-11 15:36:55.868378 | PLAY [Base post-fetch]
2025-06-11 15:36:55.883705 |
2025-06-11 15:36:55.883828 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-11 15:36:55.939538 | orchestrator | skipping: Conditional result was False
2025-06-11 15:36:55.948084 |
2025-06-11 15:36:55.948246 | TASK [fetch-output : Set log path for single node]
2025-06-11 15:36:56.005131 | orchestrator | ok
2025-06-11 15:36:56.014372 |
2025-06-11 15:36:56.014509 | LOOP [fetch-output : Ensure local output dirs]
2025-06-11 15:36:56.509393 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/logs"
2025-06-11 15:36:56.800638 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/artifacts"
2025-06-11 15:36:57.100277 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/docs"
2025-06-11 15:36:57.116032 |
2025-06-11 15:36:57.116303 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-11 15:36:58.110071 | orchestrator | changed: .d..t...... ./
2025-06-11 15:36:58.110508 | orchestrator | changed: All items complete
2025-06-11 15:36:58.110625 |
2025-06-11 15:36:58.808394 | orchestrator | changed: .d..t...... ./
2025-06-11 15:36:59.540939 | orchestrator | changed: .d..t...... ./
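The ".d..t...... ./" entries above are rsync --itemize-changes codes: a directory ("d") whose modification time ("t") was updated, with everything else already in sync, so the staged output trees carried little or no new content. The collect step effectively pulls each staging directory from the node into the executor's build workspace, roughly as follows (the role's exact invocation is not visible in the log):

  # Hypothetical sketch of one pull; -i produces the itemized codes above.
  rsync -avi orchestrator:zuul-output/logs/ \
      /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/logs/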
2025-06-11 15:36:59.573493 |
2025-06-11 15:36:59.573657 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-11 15:36:59.610783 | orchestrator | skipping: Conditional result was False
2025-06-11 15:36:59.613504 | orchestrator | skipping: Conditional result was False
2025-06-11 15:36:59.638538 |
2025-06-11 15:36:59.638656 | PLAY RECAP
2025-06-11 15:36:59.638739 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-11 15:36:59.638782 |
2025-06-11 15:36:59.784632 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-11 15:36:59.786992 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-11 15:37:00.518792 |
2025-06-11 15:37:00.518979 | PLAY [Base post]
2025-06-11 15:37:00.533702 |
2025-06-11 15:37:00.533841 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-11 15:37:01.798288 | orchestrator | changed
2025-06-11 15:37:01.810624 |
2025-06-11 15:37:01.810785 | PLAY RECAP
2025-06-11 15:37:01.810893 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-11 15:37:01.810970 |
2025-06-11 15:37:01.940930 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-11 15:37:01.943459 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-11 15:37:02.758374 |
2025-06-11 15:37:02.758546 | PLAY [Base post-logs]
2025-06-11 15:37:02.769288 |
2025-06-11 15:37:02.769422 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-11 15:37:03.242385 | localhost | changed
2025-06-11 15:37:03.252510 |
2025-06-11 15:37:03.252649 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-11 15:37:03.288720 | localhost | ok
2025-06-11 15:37:03.293098 |
2025-06-11 15:37:03.293256 | TASK [Set zuul-log-path fact]
2025-06-11 15:37:03.309317 | localhost | ok
2025-06-11 15:37:03.320840 |
2025-06-11 15:37:03.320971 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-11 15:37:03.348366 | localhost | ok
2025-06-11 15:37:03.355648 |
2025-06-11 15:37:03.355853 | TASK [upload-logs : Create log directories]
2025-06-11 15:37:03.856388 | localhost | changed
2025-06-11 15:37:03.861591 |
2025-06-11 15:37:03.861758 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-11 15:37:04.385343 | localhost -> localhost | ok: Runtime: 0:00:00.007221
2025-06-11 15:37:04.395005 |
2025-06-11 15:37:04.395271 | TASK [upload-logs : Upload logs to log server]
2025-06-11 15:37:04.986804 | localhost | Output suppressed because no_log was given
2025-06-11 15:37:04.991238 |
2025-06-11 15:37:04.991455 | LOOP [upload-logs : Compress console log and json output]
2025-06-11 15:37:05.051205 | localhost | skipping: Conditional result was False
2025-06-11 15:37:05.056219 | localhost | skipping: Conditional result was False
2025-06-11 15:37:05.063975 |
2025-06-11 15:37:05.064276 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-11 15:37:05.113816 | localhost | skipping: Conditional result was False
2025-06-11 15:37:05.114411 |
2025-06-11 15:37:05.119661 | localhost | skipping: Conditional result was False
2025-06-11 15:37:05.131092 |
2025-06-11 15:37:05.131421 | LOOP [upload-logs : Upload console log and json output]
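The excerpt ends in the upload-logs tasks: "Ensure logs are readable before uploading" completed in a few milliseconds, the upload itself is suppressed by no_log, and the compress/upload loops for the console log were skipped. The readability step boils down to something like the following sketch, assuming the same build workspace path (the role's actual command is not shown in the log):

  # Hypothetical sketch: make staged files world-readable and
  # directories traversable before the uploader runs.
  find /var/lib/zuul/builds/e49b0e958fa6455e9528dd04eee221c9/work/logs \
      -type d -exec chmod a+rx {} + -o -type f -exec chmod a+r {} +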