2025-11-11 00:00:10.020323 | Job console starting
2025-11-11 00:00:10.076138 | Updating git repos
2025-11-11 00:00:10.135402 | Cloning repos into workspace
2025-11-11 00:00:10.293608 | Restoring repo states
2025-11-11 00:00:10.313855 | Merging changes
2025-11-11 00:00:10.313871 | Checking out repos
2025-11-11 00:00:10.570117 | Preparing playbooks
2025-11-11 00:00:11.204434 | Running Ansible setup
2025-11-11 00:00:15.349088 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-11-11 00:00:16.040393 |
2025-11-11 00:00:16.040558 | PLAY [Base pre]
2025-11-11 00:00:16.056091 |
2025-11-11 00:00:16.056199 | TASK [Setup log path fact]
2025-11-11 00:00:16.085294 | orchestrator | ok
2025-11-11 00:00:16.101965 |
2025-11-11 00:00:16.102085 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-11 00:00:16.140718 | orchestrator | ok
2025-11-11 00:00:16.151870 |
2025-11-11 00:00:16.152037 | TASK [emit-job-header : Print job information]
2025-11-11 00:00:16.210068 | # Job Information
2025-11-11 00:00:16.210335 | Ansible Version: 2.16.14
2025-11-11 00:00:16.210400 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-11-11 00:00:16.210470 | Pipeline: periodic-midnight
2025-11-11 00:00:16.210514 | Executor: 521e9411259a
2025-11-11 00:00:16.210553 | Triggered by: https://github.com/osism/testbed
2025-11-11 00:00:16.210591 | Event ID: 8e6d436ed1934ec09f8329f2429b3104
2025-11-11 00:00:16.220158 |
2025-11-11 00:00:16.220277 | LOOP [emit-job-header : Print node information]
2025-11-11 00:00:16.373225 | orchestrator | ok:
2025-11-11 00:00:16.373479 | orchestrator | # Node Information
2025-11-11 00:00:16.373536 | orchestrator | Inventory Hostname: orchestrator
2025-11-11 00:00:16.373580 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-11-11 00:00:16.373618 | orchestrator | Username: zuul-testbed04
2025-11-11 00:00:16.373743 | orchestrator | Distro: Debian 12.12
2025-11-11 00:00:16.373789 | orchestrator | Provider: static-testbed
2025-11-11 00:00:16.373826 | orchestrator | Region:
2025-11-11 00:00:16.373863 | orchestrator | Label: testbed-orchestrator
2025-11-11 00:00:16.373946 | orchestrator | Product Name: OpenStack Nova
2025-11-11 00:00:16.373984 | orchestrator | Interface IP: 81.163.193.140
2025-11-11 00:00:16.390162 |
2025-11-11 00:00:16.390274 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-11-11 00:00:16.827910 | orchestrator -> localhost | changed
2025-11-11 00:00:16.838697 |
2025-11-11 00:00:16.838815 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-11-11 00:00:17.721743 | orchestrator -> localhost | changed
2025-11-11 00:00:17.732670 |
2025-11-11 00:00:17.732753 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-11-11 00:00:17.957281 | orchestrator -> localhost | ok
2025-11-11 00:00:17.965205 |
2025-11-11 00:00:17.965309 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-11-11 00:00:17.983551 | orchestrator | ok
2025-11-11 00:00:17.998694 | orchestrator | included: /var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-11-11 00:00:18.006147 |
2025-11-11 00:00:18.006225 | TASK [add-build-sshkey : Create Temp SSH key]
2025-11-11 00:00:19.083316 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-11-11 00:00:19.083520 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/4bdd915bb0514d86bdfa070d35e992a8_id_rsa
2025-11-11 00:00:19.083561 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/4bdd915bb0514d86bdfa070d35e992a8_id_rsa.pub
2025-11-11 00:00:19.083590 | orchestrator -> localhost | The key fingerprint is:
2025-11-11 00:00:19.083616 | orchestrator -> localhost | SHA256:XSWwJQyCIZ/3ZYWTngIVErp0y9k0BiVM09/XExopvg4 zuul-build-sshkey
2025-11-11 00:00:19.083641 | orchestrator -> localhost | The key's randomart image is:
2025-11-11 00:00:19.083676 | orchestrator -> localhost | +---[RSA 3072]----+
2025-11-11 00:00:19.083700 | orchestrator -> localhost | | . .=O==+o++ o |
2025-11-11 00:00:19.083725 | orchestrator -> localhost | | o.oo*. =* = . |
2025-11-11 00:00:19.083747 | orchestrator -> localhost | | = o.+o=+o o..|
2025-11-11 00:00:19.083768 | orchestrator -> localhost | | . = B.=+o.....|
2025-11-11 00:00:19.083789 | orchestrator -> localhost | | . + S.. .. .|
2025-11-11 00:00:19.083816 | orchestrator -> localhost | | E . |
2025-11-11 00:00:19.083839 | orchestrator -> localhost | | o |
2025-11-11 00:00:19.083862 | orchestrator -> localhost | | . |
2025-11-11 00:00:19.083897 | orchestrator -> localhost | | |
2025-11-11 00:00:19.083920 | orchestrator -> localhost | +----[SHA256]-----+
2025-11-11 00:00:19.083973 | orchestrator -> localhost | ok: Runtime: 0:00:00.650391
2025-11-11 00:00:19.091140 |
2025-11-11 00:00:19.091231 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-11-11 00:00:19.119707 | orchestrator | ok
2025-11-11 00:00:19.128992 | orchestrator | included: /var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-11-11 00:00:19.138727 |
2025-11-11 00:00:19.138808 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-11-11 00:00:19.163443 | orchestrator | skipping: Conditional result was False
2025-11-11 00:00:19.178695 |
2025-11-11 00:00:19.178852 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-11-11 00:00:19.687323 | orchestrator | changed
2025-11-11 00:00:19.694965 |
2025-11-11 00:00:19.695063 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-11-11 00:00:19.919242 | orchestrator | ok
2025-11-11 00:00:19.932097 |
2025-11-11 00:00:19.932199 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-11-11 00:00:20.316544 | orchestrator | ok
2025-11-11 00:00:20.326401 |
2025-11-11 00:00:20.326537 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-11-11 00:00:20.728520 | orchestrator | ok
2025-11-11 00:00:20.737824 |
2025-11-11 00:00:20.737997 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-11-11 00:00:20.761730 | orchestrator | skipping: Conditional result was False
2025-11-11 00:00:20.768729 |
2025-11-11 00:00:20.768834 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-11-11 00:00:21.165699 | orchestrator -> localhost | changed
2025-11-11 00:00:21.179208 |
2025-11-11 00:00:21.179302 | TASK [add-build-sshkey : Add back temp key]
2025-11-11 00:00:21.464735 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/4bdd915bb0514d86bdfa070d35e992a8_id_rsa (zuul-build-sshkey)
2025-11-11 00:00:21.465256 | orchestrator -> localhost | ok: Runtime: 0:00:00.013826
2025-11-11 00:00:21.479315 |
2025-11-11 00:00:21.479434 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-11-11 00:00:21.872041 | orchestrator | ok
2025-11-11 00:00:21.877696 |
2025-11-11 00:00:21.877781 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-11-11 00:00:21.910546 | orchestrator | skipping: Conditional result was False
2025-11-11 00:00:21.948614 |
2025-11-11 00:00:21.948707 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-11-11 00:00:22.333732 | orchestrator | ok
2025-11-11 00:00:22.345545 |
2025-11-11 00:00:22.345643 | TASK [validate-host : Define zuul_info_dir fact]
2025-11-11 00:00:22.391276 | orchestrator | ok
2025-11-11 00:00:22.402137 |
2025-11-11 00:00:22.402262 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-11-11 00:00:22.668478 | orchestrator -> localhost | ok
2025-11-11 00:00:22.675570 |
2025-11-11 00:00:22.675658 | TASK [validate-host : Collect information about the host]
2025-11-11 00:00:23.758016 | orchestrator | ok
2025-11-11 00:00:23.774821 |
2025-11-11 00:00:23.774958 | TASK [validate-host : Sanitize hostname]
2025-11-11 00:00:23.822661 | orchestrator | ok
2025-11-11 00:00:23.827736 |
2025-11-11 00:00:23.827822 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-11-11 00:00:24.337645 | orchestrator -> localhost | changed
2025-11-11 00:00:24.352155 |
2025-11-11 00:00:24.352296 | TASK [validate-host : Collect information about zuul worker]
2025-11-11 00:00:24.764164 | orchestrator | ok
2025-11-11 00:00:24.772563 |
2025-11-11 00:00:24.772695 | TASK [validate-host : Write out all zuul information for each host]
2025-11-11 00:00:25.250580 | orchestrator -> localhost | changed
2025-11-11 00:00:25.262097 |
2025-11-11 00:00:25.262192 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-11-11 00:00:25.509328 | orchestrator | ok
2025-11-11 00:00:25.518147 |
2025-11-11 00:00:25.518255 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-11-11 00:01:09.732308 | orchestrator | changed:
2025-11-11 00:01:09.732483 | orchestrator | .d..t...... src/
2025-11-11 00:01:09.732517 | orchestrator | .d..t...... src/github.com/
2025-11-11 00:01:09.732543 | orchestrator | .d..t...... src/github.com/osism/
2025-11-11 00:01:09.732565 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-11-11 00:01:09.732585 | orchestrator | RedHat.yml
2025-11-11 00:01:09.746808 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-11-11 00:01:09.746825 | orchestrator | RedHat.yml
2025-11-11 00:01:09.746911 | orchestrator | = 2.2.0"...
2025-11-11 00:01:22.517782 | orchestrator | 00:01:22.517 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-11-11 00:01:22.535306 | orchestrator | 00:01:22.535 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-11-11 00:01:23.016041 | orchestrator | 00:01:23.015 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-11-11 00:01:23.881522 | orchestrator | 00:01:23.881 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-11-11 00:01:23.940636 | orchestrator | 00:01:23.940 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-11-11 00:01:24.401093 | orchestrator | 00:01:24.400 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-11-11 00:01:24.990378 | orchestrator | 00:01:24.990 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-11-11 00:01:25.791438 | orchestrator | 00:01:25.791 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-11-11 00:01:25.791495 | orchestrator | 00:01:25.791 STDOUT terraform: Providers are signed by their developers.
2025-11-11 00:01:25.791501 | orchestrator | 00:01:25.791 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-11-11 00:01:25.791507 | orchestrator | 00:01:25.791 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-11-11 00:01:25.791662 | orchestrator | 00:01:25.791 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-11-11 00:01:25.791786 | orchestrator | 00:01:25.791 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-11-11 00:01:25.791814 | orchestrator | 00:01:25.791 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-11-11 00:01:25.791830 | orchestrator | 00:01:25.791 STDOUT terraform: you run "tofu init" in the future.
2025-11-11 00:01:25.791843 | orchestrator | 00:01:25.791 STDOUT terraform: OpenTofu has been successfully initialized!
2025-11-11 00:01:25.791855 | orchestrator | 00:01:25.791 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-11-11 00:01:25.791871 | orchestrator | 00:01:25.791 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-11-11 00:01:25.791883 | orchestrator | 00:01:25.791 STDOUT terraform: should now work.
2025-11-11 00:01:25.791963 | orchestrator | 00:01:25.791 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-11-11 00:01:25.792038 | orchestrator | 00:01:25.791 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-11-11 00:01:25.792055 | orchestrator | 00:01:25.791 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-11-11 00:01:26.122953 | orchestrator | 00:01:26.122 STDOUT terraform: Created and switched to workspace "ci"!
2025-11-11 00:01:26.123044 | orchestrator | 00:01:26.122 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-11-11 00:01:26.123088 | orchestrator | 00:01:26.122 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-11-11 00:01:26.123101 | orchestrator | 00:01:26.122 STDOUT terraform: for this configuration.
2025-11-11 00:01:26.366941 | orchestrator | 00:01:26.366 STDOUT terraform: ci.auto.tfvars
2025-11-11 00:01:26.370100 | orchestrator | 00:01:26.370 STDOUT terraform: default_custom.tf
2025-11-11 00:01:27.334084 | orchestrator | 00:01:27.332 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-11-11 00:01:27.884313 | orchestrator | 00:01:27.884 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-11-11 00:01:28.134724 | orchestrator | 00:01:28.134 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-11-11 00:01:28.134785 | orchestrator | 00:01:28.134 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-11-11 00:01:28.134793 | orchestrator | 00:01:28.134 STDOUT terraform:   + create
2025-11-11 00:01:28.134833 | orchestrator | 00:01:28.134 STDOUT terraform:  <= read (data resources)
2025-11-11 00:01:28.135013 | orchestrator | 00:01:28.134 STDOUT terraform: OpenTofu will perform the following actions:
2025-11-11 00:01:28.135100 | orchestrator | 00:01:28.134 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-11-11 00:01:28.135141 | orchestrator | 00:01:28.134 STDOUT terraform:   # (config refers to values not yet known)
2025-11-11 00:01:28.135155 | orchestrator | 00:01:28.134 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-11-11 00:01:28.135166 | orchestrator | 00:01:28.135 STDOUT terraform:       + checksum = (known after apply)
2025-11-11 00:01:28.135177 | orchestrator | 00:01:28.135 STDOUT terraform:       + created_at = (known after apply)
2025-11-11 00:01:28.135192 | orchestrator | 00:01:28.135 STDOUT terraform:       + file = (known after apply)
2025-11-11 00:01:28.135206 | orchestrator | 00:01:28.135 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.135362 | orchestrator | 00:01:28.135 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.135380 | orchestrator | 00:01:28.135 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-11-11 00:01:28.135395 | orchestrator | 00:01:28.135 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-11-11 00:01:28.135455 | orchestrator | 00:01:28.135 STDOUT terraform:       + most_recent = true
2025-11-11 00:01:28.135472 | orchestrator | 00:01:28.135 STDOUT terraform:       + name = (known after apply)
2025-11-11 00:01:28.135551 | orchestrator | 00:01:28.135 STDOUT terraform:       + protected = (known after apply)
2025-11-11 00:01:28.135569 | orchestrator | 00:01:28.135 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.135643 | orchestrator | 00:01:28.135 STDOUT terraform:       + schema = (known after apply)
2025-11-11 00:01:28.135658 | orchestrator | 00:01:28.135 STDOUT terraform:       + size_bytes = (known after apply)
2025-11-11 00:01:28.141033 | orchestrator | 00:01:28.135 STDOUT terraform:       + tags = (known after apply)
2025-11-11 00:01:28.141219 | orchestrator | 00:01:28.141 STDOUT terraform:       + updated_at = (known after apply)
2025-11-11 00:01:28.141634 | orchestrator | 00:01:28.141 STDOUT terraform:     }
2025-11-11 00:01:28.141750 | orchestrator | 00:01:28.141 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-11-11 00:01:28.142131 | orchestrator | 00:01:28.141 STDOUT terraform:   # (config refers to values not yet known)
2025-11-11 00:01:28.142560 | orchestrator | 00:01:28.142 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-11-11 00:01:28.143146 | orchestrator | 00:01:28.142 STDOUT terraform:       + checksum = (known after apply)
2025-11-11 00:01:28.143199 | orchestrator | 00:01:28.143 STDOUT terraform:       + created_at = (known after apply)
2025-11-11 00:01:28.143632 | orchestrator | 00:01:28.143 STDOUT terraform:       + file = (known after apply)
2025-11-11 00:01:28.144008 | orchestrator | 00:01:28.143 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.144336 | orchestrator | 00:01:28.143 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.144579 | orchestrator | 00:01:28.144 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-11-11 00:01:28.145211 | orchestrator | 00:01:28.144 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-11-11 00:01:28.146582 | orchestrator | 00:01:28.145 STDOUT terraform:       + most_recent = true
2025-11-11 00:01:28.148861 | orchestrator | 00:01:28.146 STDOUT terraform:       + name = (known after apply)
2025-11-11 00:01:28.149025 | orchestrator | 00:01:28.148 STDOUT terraform:       + protected = (known after apply)
2025-11-11 00:01:28.149045 | orchestrator | 00:01:28.148 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.149147 | orchestrator | 00:01:28.149 STDOUT terraform:       + schema = (known after apply)
2025-11-11 00:01:28.149698 | orchestrator | 00:01:28.149 STDOUT terraform:       + size_bytes = (known after apply)
2025-11-11 00:01:28.149746 | orchestrator | 00:01:28.149 STDOUT terraform:       + tags = (known after apply)
2025-11-11 00:01:28.151940 | orchestrator | 00:01:28.149 STDOUT terraform:       + updated_at = (known after apply)
2025-11-11 00:01:28.151968 | orchestrator | 00:01:28.151 STDOUT terraform:     }
2025-11-11 00:01:28.151997 | orchestrator | 00:01:28.151 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-11-11 00:01:28.152025 | orchestrator | 00:01:28.151 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-11-11 00:01:28.152061 | orchestrator | 00:01:28.152 STDOUT terraform:       + content = (known after apply)
2025-11-11 00:01:28.152101 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-11-11 00:01:28.152129 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-11-11 00:01:28.152173 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_md5 = (known after apply)
2025-11-11 00:01:28.152207 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha1 = (known after apply)
2025-11-11 00:01:28.152241 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha256 = (known after apply)
2025-11-11 00:01:28.152276 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha512 = (known after apply)
2025-11-11 00:01:28.152304 | orchestrator | 00:01:28.152 STDOUT terraform:       + directory_permission = "0777"
2025-11-11 00:01:28.152329 | orchestrator | 00:01:28.152 STDOUT terraform:       + file_permission = "0644"
2025-11-11 00:01:28.152365 | orchestrator | 00:01:28.152 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-11-11 00:01:28.152399 | orchestrator | 00:01:28.152 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.152407 | orchestrator | 00:01:28.152 STDOUT terraform:     }
2025-11-11 00:01:28.152435 | orchestrator | 00:01:28.152 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-11-11 00:01:28.152457 | orchestrator | 00:01:28.152 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-11-11 00:01:28.152496 | orchestrator | 00:01:28.152 STDOUT terraform:       + content = (known after apply)
2025-11-11 00:01:28.152531 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-11-11 00:01:28.152566 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-11-11 00:01:28.152599 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_md5 = (known after apply)
2025-11-11 00:01:28.152633 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha1 = (known after apply)
2025-11-11 00:01:28.152682 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha256 = (known after apply)
2025-11-11 00:01:28.152723 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_sha512 = (known after apply)
2025-11-11 00:01:28.152746 | orchestrator | 00:01:28.152 STDOUT terraform:       + directory_permission = "0777"
2025-11-11 00:01:28.152770 | orchestrator | 00:01:28.152 STDOUT terraform:       + file_permission = "0644"
2025-11-11 00:01:28.152801 | orchestrator | 00:01:28.152 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-11-11 00:01:28.152836 | orchestrator | 00:01:28.152 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.152844 | orchestrator | 00:01:28.152 STDOUT terraform:     }
2025-11-11 00:01:28.152868 | orchestrator | 00:01:28.152 STDOUT terraform:   # local_file.inventory will be created
2025-11-11 00:01:28.152891 | orchestrator | 00:01:28.152 STDOUT terraform:   + resource "local_file" "inventory" {
2025-11-11 00:01:28.152924 | orchestrator | 00:01:28.152 STDOUT terraform:       + content = (known after apply)
2025-11-11 00:01:28.152958 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-11-11 00:01:28.152991 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-11-11 00:01:28.153026 | orchestrator | 00:01:28.152 STDOUT terraform:       + content_md5 = (known after apply)
2025-11-11 00:01:28.153060 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha1 = (known after apply)
2025-11-11 00:01:28.153094 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha256 = (known after apply)
2025-11-11 00:01:28.153126 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha512 = (known after apply)
2025-11-11 00:01:28.153148 | orchestrator | 00:01:28.153 STDOUT terraform:       + directory_permission = "0777"
2025-11-11 00:01:28.153171 | orchestrator | 00:01:28.153 STDOUT terraform:       + file_permission = "0644"
2025-11-11 00:01:28.153200 | orchestrator | 00:01:28.153 STDOUT terraform:       + filename = "inventory.ci"
2025-11-11 00:01:28.153247 | orchestrator | 00:01:28.153 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.153255 | orchestrator | 00:01:28.153 STDOUT terraform:     }
2025-11-11 00:01:28.153279 | orchestrator | 00:01:28.153 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-11-11 00:01:28.153307 | orchestrator | 00:01:28.153 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-11-11 00:01:28.153339 | orchestrator | 00:01:28.153 STDOUT terraform:       + content = (sensitive value)
2025-11-11 00:01:28.153372 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-11-11 00:01:28.153405 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-11-11 00:01:28.153437 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_md5 = (known after apply)
2025-11-11 00:01:28.153472 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha1 = (known after apply)
2025-11-11 00:01:28.153506 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha256 = (known after apply)
2025-11-11 00:01:28.153540 | orchestrator | 00:01:28.153 STDOUT terraform:       + content_sha512 = (known after apply)
2025-11-11 00:01:28.153562 | orchestrator | 00:01:28.153 STDOUT terraform:       + directory_permission = "0700"
2025-11-11 00:01:28.153585 | orchestrator | 00:01:28.153 STDOUT terraform:       + file_permission = "0600"
2025-11-11 00:01:28.153614 | orchestrator | 00:01:28.153 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-11-11 00:01:28.153654 | orchestrator | 00:01:28.153 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.153691 | orchestrator | 00:01:28.153 STDOUT terraform:     }
2025-11-11 00:01:28.153699 | orchestrator | 00:01:28.153 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-11-11 00:01:28.153723 | orchestrator | 00:01:28.153 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-11-11 00:01:28.153744 | orchestrator | 00:01:28.153 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.153752 | orchestrator | 00:01:28.153 STDOUT terraform:     }
2025-11-11 00:01:28.153805 | orchestrator | 00:01:28.153 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-11-11 00:01:28.153849 | orchestrator | 00:01:28.153 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-11-11 00:01:28.153884 | orchestrator | 00:01:28.153 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.153907 | orchestrator | 00:01:28.153 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.153954 | orchestrator | 00:01:28.153 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.154335 | orchestrator | 00:01:28.153 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.154344 | orchestrator | 00:01:28.153 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.154349 | orchestrator | 00:01:28.154 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-11-11 00:01:28.154354 | orchestrator | 00:01:28.154 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.154359 | orchestrator | 00:01:28.154 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.154372 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.154377 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.154381 | orchestrator | 00:01:28.154 STDOUT terraform:     }
2025-11-11 00:01:28.154400 | orchestrator | 00:01:28.154 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-11-11 00:01:28.154406 | orchestrator | 00:01:28.154 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.154411 | orchestrator | 00:01:28.154 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.154415 | orchestrator | 00:01:28.154 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.154420 | orchestrator | 00:01:28.154 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.154427 | orchestrator | 00:01:28.154 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.154432 | orchestrator | 00:01:28.154 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.154437 | orchestrator | 00:01:28.154 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-11-11 00:01:28.154464 | orchestrator | 00:01:28.154 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.154502 | orchestrator | 00:01:28.154 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.154511 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.154543 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.154550 | orchestrator | 00:01:28.154 STDOUT terraform:     }
2025-11-11 00:01:28.154651 | orchestrator | 00:01:28.154 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-11-11 00:01:28.154670 | orchestrator | 00:01:28.154 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.154719 | orchestrator | 00:01:28.154 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.154727 | orchestrator | 00:01:28.154 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.154762 | orchestrator | 00:01:28.154 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.154796 | orchestrator | 00:01:28.154 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.154829 | orchestrator | 00:01:28.154 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.154871 | orchestrator | 00:01:28.154 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-11-11 00:01:28.154905 | orchestrator | 00:01:28.154 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.154924 | orchestrator | 00:01:28.154 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.154947 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.154969 | orchestrator | 00:01:28.154 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.154977 | orchestrator | 00:01:28.154 STDOUT terraform:     }
2025-11-11 00:01:28.155026 | orchestrator | 00:01:28.154 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-11-11 00:01:28.155068 | orchestrator | 00:01:28.155 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.155102 | orchestrator | 00:01:28.155 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.155125 | orchestrator | 00:01:28.155 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.155158 | orchestrator | 00:01:28.155 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.155191 | orchestrator | 00:01:28.155 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.155225 | orchestrator | 00:01:28.155 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.155267 | orchestrator | 00:01:28.155 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-11-11 00:01:28.155300 | orchestrator | 00:01:28.155 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.155317 | orchestrator | 00:01:28.155 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.155341 | orchestrator | 00:01:28.155 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.155364 | orchestrator | 00:01:28.155 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.155382 | orchestrator | 00:01:28.155 STDOUT terraform:     }
2025-11-11 00:01:28.155429 | orchestrator | 00:01:28.155 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-11-11 00:01:28.155471 | orchestrator | 00:01:28.155 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.155507 | orchestrator | 00:01:28.155 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.155526 | orchestrator | 00:01:28.155 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.155560 | orchestrator | 00:01:28.155 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.155594 | orchestrator | 00:01:28.155 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.155629 | orchestrator | 00:01:28.155 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.155700 | orchestrator | 00:01:28.155 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-11-11 00:01:28.155709 | orchestrator | 00:01:28.155 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.155732 | orchestrator | 00:01:28.155 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.155754 | orchestrator | 00:01:28.155 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.155777 | orchestrator | 00:01:28.155 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.155784 | orchestrator | 00:01:28.155 STDOUT terraform:     }
2025-11-11 00:01:28.155829 | orchestrator | 00:01:28.155 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-11-11 00:01:28.155875 | orchestrator | 00:01:28.155 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.155907 | orchestrator | 00:01:28.155 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.155928 | orchestrator | 00:01:28.155 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.155964 | orchestrator | 00:01:28.155 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.155996 | orchestrator | 00:01:28.155 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.156029 | orchestrator | 00:01:28.155 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.156073 | orchestrator | 00:01:28.156 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-11-11 00:01:28.156105 | orchestrator | 00:01:28.156 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.156124 | orchestrator | 00:01:28.156 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.156149 | orchestrator | 00:01:28.156 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.156171 | orchestrator | 00:01:28.156 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.156178 | orchestrator | 00:01:28.156 STDOUT terraform:     }
2025-11-11 00:01:28.156225 | orchestrator | 00:01:28.156 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-11-11 00:01:28.156271 | orchestrator | 00:01:28.156 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-11 00:01:28.156302 | orchestrator | 00:01:28.156 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.156324 | orchestrator | 00:01:28.156 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.156357 | orchestrator | 00:01:28.156 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.156390 | orchestrator | 00:01:28.156 STDOUT terraform:       + image_id = (known after apply)
2025-11-11 00:01:28.156423 | orchestrator | 00:01:28.156 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.156464 | orchestrator | 00:01:28.156 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-11-11 00:01:28.156498 | orchestrator | 00:01:28.156 STDOUT terraform:       + region = (known after apply)
2025-11-11 00:01:28.156517 | orchestrator | 00:01:28.156 STDOUT terraform:       + size = 80
2025-11-11 00:01:28.156539 | orchestrator | 00:01:28.156 STDOUT terraform:       + volume_retype_policy = "never"
2025-11-11 00:01:28.156561 | orchestrator | 00:01:28.156 STDOUT terraform:       + volume_type = "ssd"
2025-11-11 00:01:28.156568 | orchestrator | 00:01:28.156 STDOUT terraform:     }
2025-11-11 00:01:28.156615 | orchestrator | 00:01:28.156 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-11-11 00:01:28.156655 | orchestrator | 00:01:28.156 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-11-11 00:01:28.156696 | orchestrator | 00:01:28.156 STDOUT terraform:       + attachment = (known after apply)
2025-11-11 00:01:28.156719 | orchestrator | 00:01:28.156 STDOUT terraform:       + availability_zone = "nova"
2025-11-11 00:01:28.156752 | orchestrator | 00:01:28.156 STDOUT terraform:       + id = (known after apply)
2025-11-11 00:01:28.156785 | orchestrator | 00:01:28.156 STDOUT terraform:       + metadata = (known after apply)
2025-11-11 00:01:28.156821 | orchestrator | 00:01:28.156 STDOUT terraform:       + name = "testbed-volume-0-node-3"
2025-11-11 00:01:28.156854 | orchestrator | 00:01:28.156 STDOUT terraform:       + region = (known
after apply) 2025-11-11 00:01:28.156875 | orchestrator | 00:01:28.156 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.156898 | orchestrator | 00:01:28.156 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.156921 | orchestrator | 00:01:28.156 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.156929 | orchestrator | 00:01:28.156 STDOUT terraform:  } 2025-11-11 00:01:28.156973 | orchestrator | 00:01:28.156 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-11-11 00:01:28.157014 | orchestrator | 00:01:28.156 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.157046 | orchestrator | 00:01:28.157 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.157069 | orchestrator | 00:01:28.157 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.157108 | orchestrator | 00:01:28.157 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.157139 | orchestrator | 00:01:28.157 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.157177 | orchestrator | 00:01:28.157 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-11-11 00:01:28.157209 | orchestrator | 00:01:28.157 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.157228 | orchestrator | 00:01:28.157 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.157250 | orchestrator | 00:01:28.157 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.157272 | orchestrator | 00:01:28.157 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.157279 | orchestrator | 00:01:28.157 STDOUT terraform:  } 2025-11-11 00:01:28.157323 | orchestrator | 00:01:28.157 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-11-11 00:01:28.157365 | orchestrator | 00:01:28.157 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.157406 | 
orchestrator | 00:01:28.157 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.157429 | orchestrator | 00:01:28.157 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.157466 | orchestrator | 00:01:28.157 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.157500 | orchestrator | 00:01:28.157 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.157537 | orchestrator | 00:01:28.157 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-11-11 00:01:28.157607 | orchestrator | 00:01:28.157 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.157627 | orchestrator | 00:01:28.157 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.157650 | orchestrator | 00:01:28.157 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.157692 | orchestrator | 00:01:28.157 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.157700 | orchestrator | 00:01:28.157 STDOUT terraform:  } 2025-11-11 00:01:28.157744 | orchestrator | 00:01:28.157 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-11-11 00:01:28.157786 | orchestrator | 00:01:28.157 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.157820 | orchestrator | 00:01:28.157 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.157844 | orchestrator | 00:01:28.157 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.157877 | orchestrator | 00:01:28.157 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.157910 | orchestrator | 00:01:28.157 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.157948 | orchestrator | 00:01:28.157 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-11-11 00:01:28.157982 | orchestrator | 00:01:28.157 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.158001 | orchestrator | 00:01:28.157 STDOUT terraform:  + size 
= 20 2025-11-11 00:01:28.158072 | orchestrator | 00:01:28.157 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.158094 | orchestrator | 00:01:28.158 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.158108 | orchestrator | 00:01:28.158 STDOUT terraform:  } 2025-11-11 00:01:28.158158 | orchestrator | 00:01:28.158 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-11-11 00:01:28.158197 | orchestrator | 00:01:28.158 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.158233 | orchestrator | 00:01:28.158 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.158256 | orchestrator | 00:01:28.158 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.158290 | orchestrator | 00:01:28.158 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.158325 | orchestrator | 00:01:28.158 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.158361 | orchestrator | 00:01:28.158 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-11-11 00:01:28.159534 | orchestrator | 00:01:28.158 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.159558 | orchestrator | 00:01:28.158 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.159562 | orchestrator | 00:01:28.158 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.159566 | orchestrator | 00:01:28.158 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.159571 | orchestrator | 00:01:28.158 STDOUT terraform:  } 2025-11-11 00:01:28.159575 | orchestrator | 00:01:28.158 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-11-11 00:01:28.159579 | orchestrator | 00:01:28.158 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.159583 | orchestrator | 00:01:28.158 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 
00:01:28.159587 | orchestrator | 00:01:28.158 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.159591 | orchestrator | 00:01:28.158 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.159595 | orchestrator | 00:01:28.158 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.159605 | orchestrator | 00:01:28.158 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-11-11 00:01:28.159609 | orchestrator | 00:01:28.158 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.159614 | orchestrator | 00:01:28.158 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.159618 | orchestrator | 00:01:28.158 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.159622 | orchestrator | 00:01:28.158 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.159626 | orchestrator | 00:01:28.158 STDOUT terraform:  } 2025-11-11 00:01:28.159632 | orchestrator | 00:01:28.158 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-11-11 00:01:28.159637 | orchestrator | 00:01:28.158 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.159641 | orchestrator | 00:01:28.158 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.159645 | orchestrator | 00:01:28.158 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.159649 | orchestrator | 00:01:28.158 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.159653 | orchestrator | 00:01:28.158 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.159657 | orchestrator | 00:01:28.158 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-11-11 00:01:28.159669 | orchestrator | 00:01:28.159 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.159673 | orchestrator | 00:01:28.159 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.159677 | orchestrator | 00:01:28.159 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-11-11 00:01:28.159681 | orchestrator | 00:01:28.159 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.159686 | orchestrator | 00:01:28.159 STDOUT terraform:  } 2025-11-11 00:01:28.159693 | orchestrator | 00:01:28.159 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-11-11 00:01:28.159698 | orchestrator | 00:01:28.159 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.159702 | orchestrator | 00:01:28.159 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.159706 | orchestrator | 00:01:28.159 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.159710 | orchestrator | 00:01:28.159 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.159714 | orchestrator | 00:01:28.159 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.159718 | orchestrator | 00:01:28.159 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-11-11 00:01:28.159722 | orchestrator | 00:01:28.159 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.159726 | orchestrator | 00:01:28.159 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.159730 | orchestrator | 00:01:28.159 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.159734 | orchestrator | 00:01:28.159 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.159738 | orchestrator | 00:01:28.159 STDOUT terraform:  } 2025-11-11 00:01:28.159746 | orchestrator | 00:01:28.159 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-11-11 00:01:28.159750 | orchestrator | 00:01:28.159 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-11 00:01:28.159754 | orchestrator | 00:01:28.159 STDOUT terraform:  + attachment = (known after apply) 2025-11-11 00:01:28.159758 | orchestrator | 00:01:28.159 STDOUT terraform:  + availability_zone = 
"nova" 2025-11-11 00:01:28.159762 | orchestrator | 00:01:28.159 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.159766 | orchestrator | 00:01:28.159 STDOUT terraform:  + metadata = (known after apply) 2025-11-11 00:01:28.159772 | orchestrator | 00:01:28.159 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-11-11 00:01:28.159776 | orchestrator | 00:01:28.159 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.159781 | orchestrator | 00:01:28.159 STDOUT terraform:  + size = 20 2025-11-11 00:01:28.159785 | orchestrator | 00:01:28.159 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-11 00:01:28.159789 | orchestrator | 00:01:28.159 STDOUT terraform:  + volume_type = "ssd" 2025-11-11 00:01:28.159793 | orchestrator | 00:01:28.159 STDOUT terraform:  } 2025-11-11 00:01:28.159798 | orchestrator | 00:01:28.159 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-11-11 00:01:28.162063 | orchestrator | 00:01:28.159 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-11-11 00:01:28.162076 | orchestrator | 00:01:28.159 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-11 00:01:28.162080 | orchestrator | 00:01:28.159 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-11 00:01:28.162084 | orchestrator | 00:01:28.159 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-11 00:01:28.162087 | orchestrator | 00:01:28.159 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.162091 | orchestrator | 00:01:28.159 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.162095 | orchestrator | 00:01:28.159 STDOUT terraform:  + config_drive = true 2025-11-11 00:01:28.162099 | orchestrator | 00:01:28.159 STDOUT terraform:  + created = (known after apply) 2025-11-11 00:01:28.162103 | orchestrator | 00:01:28.159 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-11 00:01:28.162106 | orchestrator | 
00:01:28.160 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-11-11 00:01:28.162110 | orchestrator | 00:01:28.160 STDOUT terraform:  + force_delete = false 2025-11-11 00:01:28.162114 | orchestrator | 00:01:28.160 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-11 00:01:28.162117 | orchestrator | 00:01:28.160 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.162121 | orchestrator | 00:01:28.160 STDOUT terraform:  + image_id = (known after apply) 2025-11-11 00:01:28.162125 | orchestrator | 00:01:28.160 STDOUT terraform:  + image_name = (known after apply) 2025-11-11 00:01:28.162132 | orchestrator | 00:01:28.160 STDOUT terraform:  + key_pair = "testbed" 2025-11-11 00:01:28.162140 | orchestrator | 00:01:28.160 STDOUT terraform:  + name = "testbed-manager" 2025-11-11 00:01:28.162144 | orchestrator | 00:01:28.160 STDOUT terraform:  + power_state = "active" 2025-11-11 00:01:28.162147 | orchestrator | 00:01:28.160 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.162151 | orchestrator | 00:01:28.160 STDOUT terraform:  + security_groups = (known after apply) 2025-11-11 00:01:28.162155 | orchestrator | 00:01:28.160 STDOUT terraform:  + stop_before_destroy = false 2025-11-11 00:01:28.162159 | orchestrator | 00:01:28.160 STDOUT terraform:  + updated = (known after apply) 2025-11-11 00:01:28.162162 | orchestrator | 00:01:28.160 STDOUT terraform:  + user_data = (sensitive value) 2025-11-11 00:01:28.162166 | orchestrator | 00:01:28.160 STDOUT terraform:  + block_device { 2025-11-11 00:01:28.162170 | orchestrator | 00:01:28.160 STDOUT terraform:  + boot_index = 0 2025-11-11 00:01:28.162174 | orchestrator | 00:01:28.160 STDOUT terraform:  + delete_on_termination = false 2025-11-11 00:01:28.162177 | orchestrator | 00:01:28.160 STDOUT terraform:  + destination_type = "volume" 2025-11-11 00:01:28.162181 | orchestrator | 00:01:28.160 STDOUT terraform:  + multiattach = false 2025-11-11 00:01:28.162185 | orchestrator | 
00:01:28.160 STDOUT terraform:  + source_type = "volume" 2025-11-11 00:01:28.162188 | orchestrator | 00:01:28.160 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.162192 | orchestrator | 00:01:28.160 STDOUT terraform:  } 2025-11-11 00:01:28.162196 | orchestrator | 00:01:28.160 STDOUT terraform:  + network { 2025-11-11 00:01:28.162200 | orchestrator | 00:01:28.160 STDOUT terraform:  + access_network = false 2025-11-11 00:01:28.162203 | orchestrator | 00:01:28.160 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-11 00:01:28.162207 | orchestrator | 00:01:28.160 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-11 00:01:28.162211 | orchestrator | 00:01:28.160 STDOUT terraform:  + mac = (known after apply) 2025-11-11 00:01:28.162214 | orchestrator | 00:01:28.160 STDOUT terraform:  + name = (known after apply) 2025-11-11 00:01:28.162223 | orchestrator | 00:01:28.160 STDOUT terraform:  + port = (known after apply) 2025-11-11 00:01:28.162227 | orchestrator | 00:01:28.160 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.162231 | orchestrator | 00:01:28.160 STDOUT terraform:  } 2025-11-11 00:01:28.162235 | orchestrator | 00:01:28.160 STDOUT terraform:  } 2025-11-11 00:01:28.162239 | orchestrator | 00:01:28.160 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-11-11 00:01:28.162242 | orchestrator | 00:01:28.160 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-11 00:01:28.162246 | orchestrator | 00:01:28.160 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-11 00:01:28.162250 | orchestrator | 00:01:28.160 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-11 00:01:28.162253 | orchestrator | 00:01:28.160 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-11 00:01:28.162260 | orchestrator | 00:01:28.160 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.162264 | orchestrator | 
00:01:28.160 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.162268 | orchestrator | 00:01:28.161 STDOUT terraform:  + config_drive = true 2025-11-11 00:01:28.162271 | orchestrator | 00:01:28.161 STDOUT terraform:  + created = (known after apply) 2025-11-11 00:01:28.162275 | orchestrator | 00:01:28.161 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-11 00:01:28.162279 | orchestrator | 00:01:28.161 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-11 00:01:28.162282 | orchestrator | 00:01:28.161 STDOUT terraform:  + force_delete = false 2025-11-11 00:01:28.162286 | orchestrator | 00:01:28.161 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-11 00:01:28.162290 | orchestrator | 00:01:28.161 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.162293 | orchestrator | 00:01:28.161 STDOUT terraform:  + image_id = (known after apply) 2025-11-11 00:01:28.162297 | orchestrator | 00:01:28.161 STDOUT terraform:  + image_name = (known after apply) 2025-11-11 00:01:28.162301 | orchestrator | 00:01:28.161 STDOUT terraform:  + key_pair = "testbed" 2025-11-11 00:01:28.162304 | orchestrator | 00:01:28.161 STDOUT terraform:  + name = "testbed-node-0" 2025-11-11 00:01:28.162308 | orchestrator | 00:01:28.161 STDOUT terraform:  + power_state = "active" 2025-11-11 00:01:28.162312 | orchestrator | 00:01:28.161 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.162318 | orchestrator | 00:01:28.161 STDOUT terraform:  + security_groups = (known after apply) 2025-11-11 00:01:28.162321 | orchestrator | 00:01:28.161 STDOUT terraform:  + stop_before_destroy = false 2025-11-11 00:01:28.162325 | orchestrator | 00:01:28.161 STDOUT terraform:  + updated = (known after apply) 2025-11-11 00:01:28.162329 | orchestrator | 00:01:28.161 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-11 00:01:28.162333 | orchestrator | 00:01:28.161 STDOUT terraform:  + block_device { 
2025-11-11 00:01:28.162336 | orchestrator | 00:01:28.161 STDOUT terraform:  + boot_index = 0 2025-11-11 00:01:28.162340 | orchestrator | 00:01:28.161 STDOUT terraform:  + delete_on_termination = false 2025-11-11 00:01:28.162344 | orchestrator | 00:01:28.161 STDOUT terraform:  + destination_type = "volume" 2025-11-11 00:01:28.162348 | orchestrator | 00:01:28.161 STDOUT terraform:  + multiattach = false 2025-11-11 00:01:28.162351 | orchestrator | 00:01:28.161 STDOUT terraform:  + source_type = "volume" 2025-11-11 00:01:28.162355 | orchestrator | 00:01:28.161 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.162359 | orchestrator | 00:01:28.161 STDOUT terraform:  } 2025-11-11 00:01:28.162362 | orchestrator | 00:01:28.161 STDOUT terraform:  + network { 2025-11-11 00:01:28.162366 | orchestrator | 00:01:28.161 STDOUT terraform:  + access_network = false 2025-11-11 00:01:28.162370 | orchestrator | 00:01:28.161 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-11 00:01:28.162375 | orchestrator | 00:01:28.161 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-11 00:01:28.162384 | orchestrator | 00:01:28.161 STDOUT terraform:  + mac = (known after apply) 2025-11-11 00:01:28.162388 | orchestrator | 00:01:28.161 STDOUT terraform:  + name = (known after apply) 2025-11-11 00:01:28.162392 | orchestrator | 00:01:28.161 STDOUT terraform:  + port = (known after apply) 2025-11-11 00:01:28.162395 | orchestrator | 00:01:28.161 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.162399 | orchestrator | 00:01:28.161 STDOUT terraform:  } 2025-11-11 00:01:28.162403 | orchestrator | 00:01:28.161 STDOUT terraform:  } 2025-11-11 00:01:28.162406 | orchestrator | 00:01:28.161 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-11-11 00:01:28.162410 | orchestrator | 00:01:28.161 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-11 00:01:28.162414 | orchestrator | 
00:01:28.161 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-11 00:01:28.166148 | orchestrator | 00:01:28.161 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-11 00:01:28.166163 | orchestrator | 00:01:28.165 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-11 00:01:28.166167 | orchestrator | 00:01:28.165 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.166171 | orchestrator | 00:01:28.166 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.166175 | orchestrator | 00:01:28.166 STDOUT terraform:  + config_drive = true 2025-11-11 00:01:28.166179 | orchestrator | 00:01:28.166 STDOUT terraform:  + created = (known after apply) 2025-11-11 00:01:28.166183 | orchestrator | 00:01:28.166 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-11 00:01:28.166189 | orchestrator | 00:01:28.166 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-11 00:01:28.166193 | orchestrator | 00:01:28.166 STDOUT terraform:  + force_delete = false 2025-11-11 00:01:28.166216 | orchestrator | 00:01:28.166 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-11 00:01:28.166241 | orchestrator | 00:01:28.166 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.166283 | orchestrator | 00:01:28.166 STDOUT terraform:  + image_id = (known after apply) 2025-11-11 00:01:28.166320 | orchestrator | 00:01:28.166 STDOUT terraform:  + image_name = (known after apply) 2025-11-11 00:01:28.166329 | orchestrator | 00:01:28.166 STDOUT terraform:  + key_pair = "testbed" 2025-11-11 00:01:28.166850 | orchestrator | 00:01:28.166 STDOUT terraform:  + name = "testbed-node-1" 2025-11-11 00:01:28.166856 | orchestrator | 00:01:28.166 STDOUT terraform:  + power_state = "active" 2025-11-11 00:01:28.166860 | orchestrator | 00:01:28.166 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.166863 | orchestrator | 00:01:28.166 STDOUT terraform:  + security_groups = (known after apply) 
2025-11-11 00:01:28.166867 | orchestrator | 00:01:28.166 STDOUT terraform:  + stop_before_destroy = false 2025-11-11 00:01:28.166871 | orchestrator | 00:01:28.166 STDOUT terraform:  + updated = (known after apply) 2025-11-11 00:01:28.166880 | orchestrator | 00:01:28.166 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-11 00:01:28.166885 | orchestrator | 00:01:28.166 STDOUT terraform:  + block_device { 2025-11-11 00:01:28.166888 | orchestrator | 00:01:28.166 STDOUT terraform:  + boot_index = 0 2025-11-11 00:01:28.166892 | orchestrator | 00:01:28.166 STDOUT terraform:  + delete_on_termination = false 2025-11-11 00:01:28.166896 | orchestrator | 00:01:28.166 STDOUT terraform:  + destination_type = "volume" 2025-11-11 00:01:28.166900 | orchestrator | 00:01:28.166 STDOUT terraform:  + multiattach = false 2025-11-11 00:01:28.166903 | orchestrator | 00:01:28.166 STDOUT terraform:  + source_type = "volume" 2025-11-11 00:01:28.166910 | orchestrator | 00:01:28.166 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.166914 | orchestrator | 00:01:28.166 STDOUT terraform:  } 2025-11-11 00:01:28.166918 | orchestrator | 00:01:28.166 STDOUT terraform:  + network { 2025-11-11 00:01:28.166922 | orchestrator | 00:01:28.166 STDOUT terraform:  + access_network = false 2025-11-11 00:01:28.166926 | orchestrator | 00:01:28.166 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-11 00:01:28.166929 | orchestrator | 00:01:28.166 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-11 00:01:28.166933 | orchestrator | 00:01:28.166 STDOUT terraform:  + mac = (known after apply) 2025-11-11 00:01:28.166937 | orchestrator | 00:01:28.166 STDOUT terraform:  + name = (known after apply) 2025-11-11 00:01:28.166944 | orchestrator | 00:01:28.166 STDOUT terraform:  + port = (known after apply) 2025-11-11 00:01:28.166948 | orchestrator | 00:01:28.166 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.166952 | 
orchestrator | 00:01:28.166 STDOUT terraform:  } 2025-11-11 00:01:28.166956 | orchestrator | 00:01:28.166 STDOUT terraform:  } 2025-11-11 00:01:28.166959 | orchestrator | 00:01:28.166 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-11-11 00:01:28.166983 | orchestrator | 00:01:28.166 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-11 00:01:28.167017 | orchestrator | 00:01:28.166 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-11 00:01:28.167059 | orchestrator | 00:01:28.167 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-11 00:01:28.167082 | orchestrator | 00:01:28.167 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-11 00:01:28.167114 | orchestrator | 00:01:28.167 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.167135 | orchestrator | 00:01:28.167 STDOUT terraform:  + availability_zone = "nova" 2025-11-11 00:01:28.167183 | orchestrator | 00:01:28.167 STDOUT terraform:  + config_drive = true 2025-11-11 00:01:28.167190 | orchestrator | 00:01:28.167 STDOUT terraform:  + created = (known after apply) 2025-11-11 00:01:28.167227 | orchestrator | 00:01:28.167 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-11 00:01:28.167254 | orchestrator | 00:01:28.167 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-11 00:01:28.167266 | orchestrator | 00:01:28.167 STDOUT terraform:  + force_delete = false 2025-11-11 00:01:28.167306 | orchestrator | 00:01:28.167 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-11 00:01:28.167335 | orchestrator | 00:01:28.167 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.167362 | orchestrator | 00:01:28.167 STDOUT terraform:  + image_id = (known after apply) 2025-11-11 00:01:28.167392 | orchestrator | 00:01:28.167 STDOUT terraform:  + image_name = (known after apply) 2025-11-11 00:01:28.167444 | orchestrator | 00:01:28.167 STDOUT terraform:  + 
key_pair = "testbed" 2025-11-11 00:01:28.167451 | orchestrator | 00:01:28.167 STDOUT terraform:  + name = "testbed-node-2" 2025-11-11 00:01:28.167490 | orchestrator | 00:01:28.167 STDOUT terraform:  + power_state = "active" 2025-11-11 00:01:28.167499 | orchestrator | 00:01:28.167 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.167528 | orchestrator | 00:01:28.167 STDOUT terraform:  + security_groups = (known after apply) 2025-11-11 00:01:28.167549 | orchestrator | 00:01:28.167 STDOUT terraform:  + stop_before_destroy = false 2025-11-11 00:01:28.167599 | orchestrator | 00:01:28.167 STDOUT terraform:  + updated = (known after apply) 2025-11-11 00:01:28.167631 | orchestrator | 00:01:28.167 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-11 00:01:28.167637 | orchestrator | 00:01:28.167 STDOUT terraform:  + block_device { 2025-11-11 00:01:28.167732 | orchestrator | 00:01:28.167 STDOUT terraform:  + boot_index = 0 2025-11-11 00:01:28.167738 | orchestrator | 00:01:28.167 STDOUT terraform:  + delete_on_termination = false 2025-11-11 00:01:28.167742 | orchestrator | 00:01:28.167 STDOUT terraform:  + destination_type = "volume" 2025-11-11 00:01:28.167747 | orchestrator | 00:01:28.167 STDOUT terraform:  + multiattach = false 2025-11-11 00:01:28.167779 | orchestrator | 00:01:28.167 STDOUT terraform:  + source_type = "volume" 2025-11-11 00:01:28.167836 | orchestrator | 00:01:28.167 STDOUT terraform:  + uuid = (known after apply) 2025-11-11 00:01:28.167845 | orchestrator | 00:01:28.167 STDOUT terraform:  } 2025-11-11 00:01:28.167849 | orchestrator | 00:01:28.167 STDOUT terraform:  + network { 2025-11-11 00:01:28.167854 | orchestrator | 00:01:28.167 STDOUT terraform:  + access_network = false 2025-11-11 00:01:28.167896 | orchestrator | 00:01:28.167 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-11 00:01:28.167902 | orchestrator | 00:01:28.167 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-11 
00:01:28.167933 | orchestrator | 00:01:28.167 STDOUT terraform:
          + mac  = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
terraform:  + admin_state_up = (known after apply) 2025-11-11 00:01:28.186407 | orchestrator | 00:01:28.185 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-11 00:01:28.186410 | orchestrator | 00:01:28.185 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-11 00:01:28.186414 | orchestrator | 00:01:28.185 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.186418 | orchestrator | 00:01:28.185 STDOUT terraform:  + device_id = (known after apply) 2025-11-11 00:01:28.186422 | orchestrator | 00:01:28.185 STDOUT terraform:  + device_owner = (known after apply) 2025-11-11 00:01:28.186426 | orchestrator | 00:01:28.185 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-11 00:01:28.186429 | orchestrator | 00:01:28.185 STDOUT terraform:  + dns_name = (known after apply) 2025-11-11 00:01:28.186433 | orchestrator | 00:01:28.185 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.186442 | orchestrator | 00:01:28.185 STDOUT terraform:  + mac_address = (known after apply) 2025-11-11 00:01:28.186446 | orchestrator | 00:01:28.185 STDOUT terraform:  + network_id = (known after apply) 2025-11-11 00:01:28.186450 | orchestrator | 00:01:28.185 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-11 00:01:28.186453 | orchestrator | 00:01:28.186 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-11 00:01:28.186457 | orchestrator | 00:01:28.186 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.186461 | orchestrator | 00:01:28.186 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-11 00:01:28.186465 | orchestrator | 00:01:28.186 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.186468 | orchestrator | 00:01:28.186 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.186472 | orchestrator | 00:01:28.186 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-11 00:01:28.186476 | orchestrator | 
00:01:28.186 STDOUT terraform:  } 2025-11-11 00:01:28.186480 | orchestrator | 00:01:28.186 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.186483 | orchestrator | 00:01:28.186 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-11 00:01:28.186487 | orchestrator | 00:01:28.186 STDOUT terraform:  } 2025-11-11 00:01:28.186491 | orchestrator | 00:01:28.186 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.186495 | orchestrator | 00:01:28.186 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-11 00:01:28.186499 | orchestrator | 00:01:28.186 STDOUT terraform:  } 2025-11-11 00:01:28.186504 | orchestrator | 00:01:28.186 STDOUT terraform:  + binding (known after apply) 2025-11-11 00:01:28.186508 | orchestrator | 00:01:28.186 STDOUT terraform:  + fixed_ip { 2025-11-11 00:01:28.186512 | orchestrator | 00:01:28.186 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-11-11 00:01:28.186516 | orchestrator | 00:01:28.186 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-11 00:01:28.186520 | orchestrator | 00:01:28.186 STDOUT terraform:  } 2025-11-11 00:01:28.186523 | orchestrator | 00:01:28.186 STDOUT terraform:  } 2025-11-11 00:01:28.186554 | orchestrator | 00:01:28.186 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-11-11 00:01:28.186624 | orchestrator | 00:01:28.186 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-11 00:01:28.186631 | orchestrator | 00:01:28.186 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-11 00:01:28.186702 | orchestrator | 00:01:28.186 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-11 00:01:28.186754 | orchestrator | 00:01:28.186 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-11 00:01:28.186788 | orchestrator | 00:01:28.186 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.186867 | orchestrator | 00:01:28.186 STDOUT 
terraform:  + device_id = (known after apply) 2025-11-11 00:01:28.186877 | orchestrator | 00:01:28.186 STDOUT terraform:  + device_owner = (known after apply) 2025-11-11 00:01:28.186920 | orchestrator | 00:01:28.186 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-11 00:01:28.186976 | orchestrator | 00:01:28.186 STDOUT terraform:  + dns_name = (known after apply) 2025-11-11 00:01:28.187032 | orchestrator | 00:01:28.186 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.187058 | orchestrator | 00:01:28.186 STDOUT terraform:  + mac_address = (known after apply) 2025-11-11 00:01:28.187111 | orchestrator | 00:01:28.187 STDOUT terraform:  + network_id = (known after apply) 2025-11-11 00:01:28.187149 | orchestrator | 00:01:28.187 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-11 00:01:28.187188 | orchestrator | 00:01:28.187 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-11 00:01:28.187240 | orchestrator | 00:01:28.187 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.187285 | orchestrator | 00:01:28.187 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-11 00:01:28.187336 | orchestrator | 00:01:28.187 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.187369 | orchestrator | 00:01:28.187 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.187403 | orchestrator | 00:01:28.187 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-11 00:01:28.187409 | orchestrator | 00:01:28.187 STDOUT terraform:  } 2025-11-11 00:01:28.187442 | orchestrator | 00:01:28.187 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.187485 | orchestrator | 00:01:28.187 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-11 00:01:28.187493 | orchestrator | 00:01:28.187 STDOUT terraform:  } 2025-11-11 00:01:28.187499 | orchestrator | 00:01:28.187 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.187560 | 
orchestrator | 00:01:28.187 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-11 00:01:28.187565 | orchestrator | 00:01:28.187 STDOUT terraform:  } 2025-11-11 00:01:28.187572 | orchestrator | 00:01:28.187 STDOUT terraform:  + binding (known after apply) 2025-11-11 00:01:28.187576 | orchestrator | 00:01:28.187 STDOUT terraform:  + fixed_ip { 2025-11-11 00:01:28.187617 | orchestrator | 00:01:28.187 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-11-11 00:01:28.187653 | orchestrator | 00:01:28.187 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-11 00:01:28.187672 | orchestrator | 00:01:28.187 STDOUT terraform:  } 2025-11-11 00:01:28.187679 | orchestrator | 00:01:28.187 STDOUT terraform:  } 2025-11-11 00:01:28.187731 | orchestrator | 00:01:28.187 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-11-11 00:01:28.187796 | orchestrator | 00:01:28.187 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-11 00:01:28.187836 | orchestrator | 00:01:28.187 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-11 00:01:28.187871 | orchestrator | 00:01:28.187 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-11 00:01:28.187959 | orchestrator | 00:01:28.187 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-11 00:01:28.187971 | orchestrator | 00:01:28.187 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.187982 | orchestrator | 00:01:28.187 STDOUT terraform:  + device_id = (known after apply) 2025-11-11 00:01:28.188033 | orchestrator | 00:01:28.187 STDOUT terraform:  + device_owner = (known after apply) 2025-11-11 00:01:28.188093 | orchestrator | 00:01:28.188 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-11 00:01:28.188104 | orchestrator | 00:01:28.188 STDOUT terraform:  + dns_name = (known after apply) 2025-11-11 00:01:28.188160 | orchestrator | 00:01:28.188 STDOUT terraform:  
+ id = (known after apply) 2025-11-11 00:01:28.188197 | orchestrator | 00:01:28.188 STDOUT terraform:  + mac_address = (known after apply) 2025-11-11 00:01:28.188245 | orchestrator | 00:01:28.188 STDOUT terraform:  + network_id = (known after apply) 2025-11-11 00:01:28.188380 | orchestrator | 00:01:28.188 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-11 00:01:28.188389 | orchestrator | 00:01:28.188 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-11 00:01:28.188413 | orchestrator | 00:01:28.188 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.188489 | orchestrator | 00:01:28.188 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-11 00:01:28.188495 | orchestrator | 00:01:28.188 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.188516 | orchestrator | 00:01:28.188 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.188554 | orchestrator | 00:01:28.188 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-11 00:01:28.188564 | orchestrator | 00:01:28.188 STDOUT terraform:  } 2025-11-11 00:01:28.188639 | orchestrator | 00:01:28.188 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.188648 | orchestrator | 00:01:28.188 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-11 00:01:28.188652 | orchestrator | 00:01:28.188 STDOUT terraform:  } 2025-11-11 00:01:28.188657 | orchestrator | 00:01:28.188 STDOUT terraform:  + allowed_address_pairs { 2025-11-11 00:01:28.188729 | orchestrator | 00:01:28.188 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-11 00:01:28.188737 | orchestrator | 00:01:28.188 STDOUT terraform:  } 2025-11-11 00:01:28.188780 | orchestrator | 00:01:28.188 STDOUT terraform:  + binding (known after apply) 2025-11-11 00:01:28.188788 | orchestrator | 00:01:28.188 STDOUT terraform:  + fixed_ip { 2025-11-11 00:01:28.188794 | orchestrator | 00:01:28.188 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-11-11 00:01:28.188830 | orchestrator | 00:01:28.188 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-11 00:01:28.188869 | orchestrator | 00:01:28.188 STDOUT terraform:  } 2025-11-11 00:01:28.188878 | orchestrator | 00:01:28.188 STDOUT terraform:  } 2025-11-11 00:01:28.188942 | orchestrator | 00:01:28.188 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-11-11 00:01:28.188972 | orchestrator | 00:01:28.188 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-11-11 00:01:28.189009 | orchestrator | 00:01:28.188 STDOUT terraform:  + force_destroy = false 2025-11-11 00:01:28.189036 | orchestrator | 00:01:28.188 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.189149 | orchestrator | 00:01:28.189 STDOUT terraform:  + port_id = (known after apply) 2025-11-11 00:01:28.189223 | orchestrator | 00:01:28.189 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.189229 | orchestrator | 00:01:28.189 STDOUT terraform:  + router_id = (known after apply) 2025-11-11 00:01:28.189234 | orchestrator | 00:01:28.189 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-11 00:01:28.189238 | orchestrator | 00:01:28.189 STDOUT terraform:  } 2025-11-11 00:01:28.189312 | orchestrator | 00:01:28.189 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-11-11 00:01:28.189321 | orchestrator | 00:01:28.189 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-11-11 00:01:28.189412 | orchestrator | 00:01:28.189 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-11 00:01:28.189420 | orchestrator | 00:01:28.189 STDOUT terraform:  + all_tags = (known after apply) 2025-11-11 00:01:28.189426 | orchestrator | 00:01:28.189 STDOUT terraform:  + availability_zone_hints = [ 2025-11-11 00:01:28.189464 | orchestrator | 00:01:28.189 STDOUT terraform:  + "nova", 2025-11-11 00:01:28.189469 | 
orchestrator | 00:01:28.189 STDOUT terraform:  ] 2025-11-11 00:01:28.189495 | orchestrator | 00:01:28.189 STDOUT terraform:  + distributed = (known after apply) 2025-11-11 00:01:28.189539 | orchestrator | 00:01:28.189 STDOUT terraform:  + enable_snat = (known after apply) 2025-11-11 00:01:28.189620 | orchestrator | 00:01:28.189 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-11-11 00:01:28.189627 | orchestrator | 00:01:28.189 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-11-11 00:01:28.189689 | orchestrator | 00:01:28.189 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.189728 | orchestrator | 00:01:28.189 STDOUT terraform:  + name = "testbed" 2025-11-11 00:01:28.189761 | orchestrator | 00:01:28.189 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.189799 | orchestrator | 00:01:28.189 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.189863 | orchestrator | 00:01:28.189 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-11-11 00:01:28.189872 | orchestrator | 00:01:28.189 STDOUT terraform:  } 2025-11-11 00:01:28.189922 | orchestrator | 00:01:28.189 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-11-11 00:01:28.190011 | orchestrator | 00:01:28.189 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-11-11 00:01:28.190033 | orchestrator | 00:01:28.189 STDOUT terraform:  + description = "ssh" 2025-11-11 00:01:28.190039 | orchestrator | 00:01:28.189 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.190072 | orchestrator | 00:01:28.190 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.190135 | orchestrator | 00:01:28.190 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.190149 | orchestrator | 00:01:28.190 STDOUT terraform:  + port_range_max = 22 2025-11-11 00:01:28.190182 | 
orchestrator | 00:01:28.190 STDOUT terraform:  + port_range_min = 22 2025-11-11 00:01:28.190225 | orchestrator | 00:01:28.190 STDOUT terraform:  + protocol = "tcp" 2025-11-11 00:01:28.190232 | orchestrator | 00:01:28.190 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.190321 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.190326 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.190356 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.190416 | orchestrator | 00:01:28.190 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.190443 | orchestrator | 00:01:28.190 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.190455 | orchestrator | 00:01:28.190 STDOUT terraform:  } 2025-11-11 00:01:28.190551 | orchestrator | 00:01:28.190 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-11-11 00:01:28.190591 | orchestrator | 00:01:28.190 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-11-11 00:01:28.190677 | orchestrator | 00:01:28.190 STDOUT terraform:  + description = "wireguard" 2025-11-11 00:01:28.190685 | orchestrator | 00:01:28.190 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.190690 | orchestrator | 00:01:28.190 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.190738 | orchestrator | 00:01:28.190 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.190761 | orchestrator | 00:01:28.190 STDOUT terraform:  + port_range_max = 51820 2025-11-11 00:01:28.190803 | orchestrator | 00:01:28.190 STDOUT terraform:  + port_range_min = 51820 2025-11-11 00:01:28.190813 | orchestrator | 00:01:28.190 STDOUT terraform:  + protocol = "udp" 2025-11-11 00:01:28.190857 | orchestrator | 
00:01:28.190 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.190912 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.190966 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.190973 | orchestrator | 00:01:28.190 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.191041 | orchestrator | 00:01:28.190 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.191086 | orchestrator | 00:01:28.191 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.191091 | orchestrator | 00:01:28.191 STDOUT terraform:  } 2025-11-11 00:01:28.191155 | orchestrator | 00:01:28.191 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-11-11 00:01:28.191204 | orchestrator | 00:01:28.191 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-11-11 00:01:28.191248 | orchestrator | 00:01:28.191 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.191299 | orchestrator | 00:01:28.191 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.191308 | orchestrator | 00:01:28.191 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.191361 | orchestrator | 00:01:28.191 STDOUT terraform:  + protocol = "tcp" 2025-11-11 00:01:28.191387 | orchestrator | 00:01:28.191 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.191457 | orchestrator | 00:01:28.191 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.191464 | orchestrator | 00:01:28.191 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.191522 | orchestrator | 00:01:28.191 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-11 00:01:28.191575 | orchestrator | 00:01:28.191 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-11-11 00:01:28.191604 | orchestrator | 00:01:28.191 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.191611 | orchestrator | 00:01:28.191 STDOUT terraform:  } 2025-11-11 00:01:28.191714 | orchestrator | 00:01:28.191 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-11-11 00:01:28.191782 | orchestrator | 00:01:28.191 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-11-11 00:01:28.191790 | orchestrator | 00:01:28.191 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.191829 | orchestrator | 00:01:28.191 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.191870 | orchestrator | 00:01:28.191 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.191913 | orchestrator | 00:01:28.191 STDOUT terraform:  + protocol = "udp" 2025-11-11 00:01:28.191959 | orchestrator | 00:01:28.191 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.192013 | orchestrator | 00:01:28.191 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.192022 | orchestrator | 00:01:28.191 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.192065 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-11 00:01:28.192107 | orchestrator | 00:01:28.192 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.192149 | orchestrator | 00:01:28.192 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.192155 | orchestrator | 00:01:28.192 STDOUT terraform:  } 2025-11-11 00:01:28.192231 | orchestrator | 00:01:28.192 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-11-11 00:01:28.192296 | orchestrator | 00:01:28.192 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule5" { 2025-11-11 00:01:28.192324 | orchestrator | 00:01:28.192 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.192334 | orchestrator | 00:01:28.192 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.192377 | orchestrator | 00:01:28.192 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.192391 | orchestrator | 00:01:28.192 STDOUT terraform:  + protocol = "icmp" 2025-11-11 00:01:28.192442 | orchestrator | 00:01:28.192 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.192477 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.192531 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.192540 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.192577 | orchestrator | 00:01:28.192 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.192633 | orchestrator | 00:01:28.192 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.192643 | orchestrator | 00:01:28.192 STDOUT terraform:  } 2025-11-11 00:01:28.192706 | orchestrator | 00:01:28.192 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-11-11 00:01:28.192755 | orchestrator | 00:01:28.192 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-11-11 00:01:28.192803 | orchestrator | 00:01:28.192 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.192809 | orchestrator | 00:01:28.192 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.192837 | orchestrator | 00:01:28.192 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.192862 | orchestrator | 00:01:28.192 STDOUT terraform:  + protocol = "tcp" 2025-11-11 00:01:28.192923 | orchestrator | 00:01:28.192 STDOUT terraform:  + region = (known 
after apply) 2025-11-11 00:01:28.192933 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.192972 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.193013 | orchestrator | 00:01:28.192 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.193055 | orchestrator | 00:01:28.192 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.193112 | orchestrator | 00:01:28.193 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.193121 | orchestrator | 00:01:28.193 STDOUT terraform:  } 2025-11-11 00:01:28.193144 | orchestrator | 00:01:28.193 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-11-11 00:01:28.193199 | orchestrator | 00:01:28.193 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-11-11 00:01:28.193228 | orchestrator | 00:01:28.193 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.193268 | orchestrator | 00:01:28.193 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.193307 | orchestrator | 00:01:28.193 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.193316 | orchestrator | 00:01:28.193 STDOUT terraform:  + protocol = "udp" 2025-11-11 00:01:28.193356 | orchestrator | 00:01:28.193 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.193398 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.193409 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.193444 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.193507 | orchestrator | 00:01:28.193 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.193515 | orchestrator | 
00:01:28.193 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.193520 | orchestrator | 00:01:28.193 STDOUT terraform:  } 2025-11-11 00:01:28.193573 | orchestrator | 00:01:28.193 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-11-11 00:01:28.193615 | orchestrator | 00:01:28.193 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-11-11 00:01:28.193645 | orchestrator | 00:01:28.193 STDOUT terraform:  + direction = "ingress" 2025-11-11 00:01:28.193702 | orchestrator | 00:01:28.193 STDOUT terraform:  + ethertype = "IPv4" 2025-11-11 00:01:28.193731 | orchestrator | 00:01:28.193 STDOUT terraform:  + id = (known after apply) 2025-11-11 00:01:28.193792 | orchestrator | 00:01:28.193 STDOUT terraform:  + protocol = "icmp" 2025-11-11 00:01:28.193797 | orchestrator | 00:01:28.193 STDOUT terraform:  + region = (known after apply) 2025-11-11 00:01:28.193847 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-11 00:01:28.193857 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-11 00:01:28.193902 | orchestrator | 00:01:28.193 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-11 00:01:28.193980 | orchestrator | 00:01:28.193 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-11 00:01:28.193990 | orchestrator | 00:01:28.193 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-11 00:01:28.193994 | orchestrator | 00:01:28.193 STDOUT terraform:  } 2025-11-11 00:01:28.194046 | orchestrator | 00:01:28.193 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-11-11 00:01:28.194105 | orchestrator | 00:01:28.194 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-11-11 00:01:28.194131 | orchestrator | 00:01:28.194 STDOUT 
terraform:  + description = "vrrp"
2025-11-11 00:01:28.194178 | orchestrator | 00:01:28.194 STDOUT terraform:  + direction = "ingress"
2025-11-11 00:01:28.194183 | orchestrator | 00:01:28.194 STDOUT terraform:  + ethertype = "IPv4"
2025-11-11 00:01:28.194253 | orchestrator | 00:01:28.194 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.194260 | orchestrator | 00:01:28.194 STDOUT terraform:  + protocol = "112"
2025-11-11 00:01:28.194265 | orchestrator | 00:01:28.194 STDOUT terraform:  + region = (known after apply)
2025-11-11 00:01:28.194318 | orchestrator | 00:01:28.194 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-11-11 00:01:28.194368 | orchestrator | 00:01:28.194 STDOUT terraform:  + remote_group_id = (known after apply)
2025-11-11 00:01:28.194380 | orchestrator | 00:01:28.194 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-11-11 00:01:28.194413 | orchestrator | 00:01:28.194 STDOUT terraform:  + security_group_id = (known after apply)
2025-11-11 00:01:28.194451 | orchestrator | 00:01:28.194 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-11 00:01:28.194458 | orchestrator | 00:01:28.194 STDOUT terraform:  }
2025-11-11 00:01:28.194532 | orchestrator | 00:01:28.194 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-11-11 00:01:28.194559 | orchestrator | 00:01:28.194 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-11-11 00:01:28.194588 | orchestrator | 00:01:28.194 STDOUT terraform:  + all_tags = (known after apply)
2025-11-11 00:01:28.194631 | orchestrator | 00:01:28.194 STDOUT terraform:  + description = "management security group"
2025-11-11 00:01:28.194641 | orchestrator | 00:01:28.194 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.194685 | orchestrator | 00:01:28.194 STDOUT terraform:  + name = "testbed-management"
2025-11-11 00:01:28.194726 | orchestrator | 00:01:28.194 STDOUT terraform:  + region = (known after apply)
2025-11-11 00:01:28.194733 | orchestrator | 00:01:28.194 STDOUT terraform:  + stateful = (known after apply)
2025-11-11 00:01:28.194769 | orchestrator | 00:01:28.194 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-11 00:01:28.194776 | orchestrator | 00:01:28.194 STDOUT terraform:  }
2025-11-11 00:01:28.194830 | orchestrator | 00:01:28.194 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-11-11 00:01:28.194880 | orchestrator | 00:01:28.194 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-11-11 00:01:28.194887 | orchestrator | 00:01:28.194 STDOUT terraform:  + all_tags = (known after apply)
2025-11-11 00:01:28.195002 | orchestrator | 00:01:28.194 STDOUT terraform:  + description = "node security group"
2025-11-11 00:01:28.195042 | orchestrator | 00:01:28.194 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.195072 | orchestrator | 00:01:28.195 STDOUT terraform:  + name = "testbed-node"
2025-11-11 00:01:28.195094 | orchestrator | 00:01:28.195 STDOUT terraform:  + region = (known after apply)
2025-11-11 00:01:28.195129 | orchestrator | 00:01:28.195 STDOUT terraform:  + stateful = (known after apply)
2025-11-11 00:01:28.195174 | orchestrator | 00:01:28.195 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-11 00:01:28.195183 | orchestrator | 00:01:28.195 STDOUT terraform:  }
2025-11-11 00:01:28.195255 | orchestrator | 00:01:28.195 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-11-11 00:01:28.195262 | orchestrator | 00:01:28.195 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-11-11 00:01:28.195292 | orchestrator | 00:01:28.195 STDOUT terraform:  + all_tags = (known after apply)
2025-11-11 00:01:28.195329 | orchestrator | 00:01:28.195 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-11-11 00:01:28.195335 | orchestrator | 00:01:28.195 STDOUT terraform:  + dns_nameservers = [
2025-11-11 00:01:28.195346 | orchestrator | 00:01:28.195 STDOUT terraform:  + "8.8.8.8",
2025-11-11 00:01:28.195364 | orchestrator | 00:01:28.195 STDOUT terraform:  + "9.9.9.9",
2025-11-11 00:01:28.195370 | orchestrator | 00:01:28.195 STDOUT terraform:  ]
2025-11-11 00:01:28.195403 | orchestrator | 00:01:28.195 STDOUT terraform:  + enable_dhcp = true
2025-11-11 00:01:28.195432 | orchestrator | 00:01:28.195 STDOUT terraform:  + gateway_ip = (known after apply)
2025-11-11 00:01:28.195498 | orchestrator | 00:01:28.195 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.195503 | orchestrator | 00:01:28.195 STDOUT terraform:  + ip_version = 4
2025-11-11 00:01:28.195508 | orchestrator | 00:01:28.195 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-11-11 00:01:28.195549 | orchestrator | 00:01:28.195 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-11-11 00:01:28.195604 | orchestrator | 00:01:28.195 STDOUT terraform:  + name = "subnet-testbed-management"
2025-11-11 00:01:28.195611 | orchestrator | 00:01:28.195 STDOUT terraform:  + network_id = (known after apply)
2025-11-11 00:01:28.195677 | orchestrator | 00:01:28.195 STDOUT terraform:  + no_gateway = false
2025-11-11 00:01:28.195684 | orchestrator | 00:01:28.195 STDOUT terraform:  + region = (known after apply)
2025-11-11 00:01:28.195734 | orchestrator | 00:01:28.195 STDOUT terraform:  + service_types = (known after apply)
2025-11-11 00:01:28.195785 | orchestrator | 00:01:28.195 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-11 00:01:28.195790 | orchestrator | 00:01:28.195 STDOUT terraform:  + allocation_pool {
2025-11-11 00:01:28.195797 | orchestrator | 00:01:28.195 STDOUT terraform:  + end = "192.168.31.250"
2025-11-11 00:01:28.195839 | orchestrator | 00:01:28.195 STDOUT terraform:  + start = "192.168.31.200"
2025-11-11 00:01:28.195848 | orchestrator | 00:01:28.195 STDOUT terraform:  }
2025-11-11 00:01:28.195852 | orchestrator | 00:01:28.195 STDOUT terraform:  }
2025-11-11 00:01:28.195858 | orchestrator | 00:01:28.195 STDOUT terraform:  # terraform_data.image will be created
2025-11-11 00:01:28.195900 | orchestrator | 00:01:28.195 STDOUT terraform:  + resource "terraform_data" "image" {
2025-11-11 00:01:28.195906 | orchestrator | 00:01:28.195 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.195911 | orchestrator | 00:01:28.195 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-11 00:01:28.195966 | orchestrator | 00:01:28.195 STDOUT terraform:  + output = (known after apply)
2025-11-11 00:01:28.195972 | orchestrator | 00:01:28.195 STDOUT terraform:  }
2025-11-11 00:01:28.196006 | orchestrator | 00:01:28.195 STDOUT terraform:  # terraform_data.image_node will be created
2025-11-11 00:01:28.196037 | orchestrator | 00:01:28.195 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-11-11 00:01:28.196047 | orchestrator | 00:01:28.196 STDOUT terraform:  + id = (known after apply)
2025-11-11 00:01:28.196141 | orchestrator | 00:01:28.196 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-11 00:01:28.196149 | orchestrator | 00:01:28.196 STDOUT terraform:  + output = (known after apply)
2025-11-11 00:01:28.196153 | orchestrator | 00:01:28.196 STDOUT terraform:  }
2025-11-11 00:01:28.196161 | orchestrator | 00:01:28.196 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-11-11 00:01:28.196165 | orchestrator | 00:01:28.196 STDOUT terraform: Changes to Outputs:
2025-11-11 00:01:28.196171 | orchestrator | 00:01:28.196 STDOUT terraform:  + manager_address = (sensitive value)
2025-11-11 00:01:28.196193 | orchestrator | 00:01:28.196 STDOUT terraform:  + private_key = (sensitive value)
2025-11-11 00:01:28.362287 | orchestrator | 00:01:28.361 STDOUT terraform: terraform_data.image_node: Creating...
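The plan above includes a rule with `protocol = "112"` and `description = "vrrp"`: an IP-protocol-number rule (VRRP has no ports) used by keepalived for failover traffic. As a rough sketch, such a rule would be declared with the OpenStack provider like this; the attribute values mirror the plan output above, but the exact wiring to `security_group_node` is an assumption, not taken from the testbed repository:

```hcl
# Sketch only: admit VRRP (IP protocol 112) ingress, as shown in the plan.
# Attaching it to security_group_node and opening remote_ip_prefix to
# 0.0.0.0/0 are assumptions mirroring the plan output, not verified source.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is identified by protocol number, not port
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```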
2025-11-11 00:01:28.362617 | orchestrator | 00:01:28.362 STDOUT terraform: terraform_data.image: Creating...
2025-11-11 00:01:28.363355 | orchestrator | 00:01:28.362 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=71cc63ee-8dcd-1339-76e3-9d3c02e3032f]
2025-11-11 00:01:28.363998 | orchestrator | 00:01:28.363 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=944a45b5-e0a2-802b-523a-881e26518e81]
2025-11-11 00:01:28.382224 | orchestrator | 00:01:28.382 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-11-11 00:01:28.384193 | orchestrator | 00:01:28.383 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-11-11 00:01:28.385533 | orchestrator | 00:01:28.385 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-11-11 00:01:28.407266 | orchestrator | 00:01:28.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-11-11 00:01:28.407313 | orchestrator | 00:01:28.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-11-11 00:01:28.407318 | orchestrator | 00:01:28.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-11-11 00:01:28.407322 | orchestrator | 00:01:28.406 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-11-11 00:01:28.407326 | orchestrator | 00:01:28.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-11-11 00:01:28.407851 | orchestrator | 00:01:28.407 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-11-11 00:01:28.409193 | orchestrator | 00:01:28.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-11-11 00:01:28.876358 | orchestrator | 00:01:28.876 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-11-11 00:01:28.877498 | orchestrator | 00:01:28.877 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-11 00:01:28.878832 | orchestrator | 00:01:28.878 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-11 00:01:28.881002 | orchestrator | 00:01:28.880 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-11-11 00:01:28.881856 | orchestrator | 00:01:28.881 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-11-11 00:01:28.883780 | orchestrator | 00:01:28.883 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-11-11 00:01:29.405844 | orchestrator | 00:01:29.405 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=6192a4f0-8503-49d5-8368-f102f30b4280]
2025-11-11 00:01:29.414749 | orchestrator | 00:01:29.413 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-11-11 00:01:30.057442 | orchestrator | 00:01:30.057 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=18e1da1bf1e3db4ed6d8a86ae2fbdaff4f1963ac]
2025-11-11 00:01:30.062403 | orchestrator | 00:01:30.062 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-11-11 00:01:30.075331 | orchestrator | 00:01:30.075 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=08aa9c1a7f584e603b05f5de54424c4a50274d90]
2025-11-11 00:01:30.080364 | orchestrator | 00:01:30.080 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-11-11 00:01:31.980632 | orchestrator | 00:01:31.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=89b8de45-7543-4421-bfde-713d4c35668f]
2025-11-11 00:01:31.989428 | orchestrator | 00:01:31.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-11-11 00:01:31.997568 | orchestrator | 00:01:31.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=e779f17b-a915-42a5-9da7-11da2e062a34]
2025-11-11 00:01:32.001613 | orchestrator | 00:01:32.001 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-11-11 00:01:32.013966 | orchestrator | 00:01:32.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=9b408528-4a47-4f88-ab85-e4a870a278b7]
2025-11-11 00:01:32.018940 | orchestrator | 00:01:32.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-11-11 00:01:32.026121 | orchestrator | 00:01:32.025 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=75ea1c13-08ac-4925-8283-d5e2f994ce5d]
2025-11-11 00:01:32.032491 | orchestrator | 00:01:32.032 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-11-11 00:01:32.033878 | orchestrator | 00:01:32.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=83daedb9-81f3-45a4-88c7-2785338cd97e]
2025-11-11 00:01:32.052872 | orchestrator | 00:01:32.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-11-11 00:01:32.122411 | orchestrator | 00:01:32.122 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=389e8dac-4c9f-40ba-96aa-7c861964ff1c]
2025-11-11 00:01:32.129812 | orchestrator | 00:01:32.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-11-11 00:01:32.135220 | orchestrator | 00:01:32.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=f9373fbe-39b8-4f8c-b928-1a6d36b5f860]
2025-11-11 00:01:32.140314 | orchestrator | 00:01:32.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=0178bab0-214e-4a1b-9430-5e2bb66f07d3]
2025-11-11 00:01:32.150808 | orchestrator | 00:01:32.150 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-11-11 00:01:32.151539 | orchestrator | 00:01:32.151 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=40873841-1866-4eee-bbb6-ab8fbb214882]
2025-11-11 00:01:32.941370 | orchestrator | 00:01:32.941 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=1a6a1253-319f-4e19-bfbc-b51e170f04a1]
2025-11-11 00:01:32.949529 | orchestrator | 00:01:32.949 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-11-11 00:01:33.463782 | orchestrator | 00:01:33.463 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=421565a7-619c-4c40-87e1-064cd0d2eab0]
2025-11-11 00:01:35.405047 | orchestrator | 00:01:35.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d762de08-88e9-4a05-8401-0b276306fde5]
2025-11-11 00:01:35.420316 | orchestrator | 00:01:35.419 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=8b5ddd48-c5c8-4302-a57f-63bca86c5d46]
2025-11-11 00:01:35.451235 | orchestrator | 00:01:35.450 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=4fce1e7d-7889-4141-aff9-09cb3f25b974]
2025-11-11 00:01:35.459101 | orchestrator | 00:01:35.458 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=4d550be8-3d05-49fa-a4d6-b58d7283d515]
2025-11-11 00:01:35.517098 | orchestrator | 00:01:35.516 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=17edd79b-0338-48a8-aec5-a06e3eed4f01]
2025-11-11 00:01:35.519046 | orchestrator | 00:01:35.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=8f52fcc5-8f85-4748-8d8f-0da86b7c7013]
2025-11-11 00:01:36.001046 | orchestrator | 00:01:36.000 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=ade83d93-4646-4784-acf3-61cc74e03ccd]
2025-11-11 00:01:36.014494 | orchestrator | 00:01:36.014 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-11-11 00:01:36.014696 | orchestrator | 00:01:36.014 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-11-11 00:01:36.015157 | orchestrator | 00:01:36.015 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-11-11 00:01:36.238237 | orchestrator | 00:01:36.237 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=bdbfbb48-fe8e-4a88-a46e-704de6dbe9d6]
2025-11-11 00:01:36.248772 | orchestrator | 00:01:36.248 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-11-11 00:01:36.249033 | orchestrator | 00:01:36.248 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-11-11 00:01:36.250563 | orchestrator | 00:01:36.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-11-11 00:01:36.250596 | orchestrator | 00:01:36.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-11-11 00:01:36.252190 | orchestrator | 00:01:36.252 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-11-11 00:01:36.253319 | orchestrator | 00:01:36.253 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-11-11 00:01:36.260980 | orchestrator | 00:01:36.260 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=85de8c83-6903-4f86-a3f2-6589e010ad9e]
2025-11-11 00:01:36.271130 | orchestrator | 00:01:36.271 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-11-11 00:01:36.271557 | orchestrator | 00:01:36.271 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-11-11 00:01:36.272449 | orchestrator | 00:01:36.272 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-11-11 00:01:36.446392 | orchestrator | 00:01:36.446 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=690e5546-a8bf-4ff4-b2b4-676e26e0005b]
2025-11-11 00:01:36.451342 | orchestrator | 00:01:36.451 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-11-11 00:01:36.692584 | orchestrator | 00:01:36.692 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=bb35611c-65c6-4812-8dfe-73a817e3fb00]
2025-11-11 00:01:36.708291 | orchestrator | 00:01:36.708 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-11-11 00:01:36.879893 | orchestrator | 00:01:36.879 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=8dcf13b9-9617-4f95-be41-aea070639f0e]
2025-11-11 00:01:36.889492 | orchestrator | 00:01:36.889 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-11-11 00:01:36.895244 | orchestrator | 00:01:36.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=2ddb116e-37d3-46a2-8de8-89f0335a9e34]
2025-11-11 00:01:36.910077 | orchestrator | 00:01:36.909 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-11-11 00:01:36.922586 | orchestrator | 00:01:36.922 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=cbbb3b78-0c4f-4c9d-8646-ccb4d26dbeb0]
2025-11-11 00:01:36.930941 | orchestrator | 00:01:36.930 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-11-11 00:01:37.064464 | orchestrator | 00:01:37.064 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=aca96f3a-39e9-4ebc-abb5-df5bc501ca19]
2025-11-11 00:01:37.077192 | orchestrator | 00:01:37.076 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-11-11 00:01:37.122478 | orchestrator | 00:01:37.122 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=bd709e9e-b494-4e64-88bf-810681a109ad]
2025-11-11 00:01:37.127976 | orchestrator | 00:01:37.127 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-11-11 00:01:37.338730 | orchestrator | 00:01:37.338 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=1e33a75a-38f3-41d2-aa0f-d314a5b8851d]
2025-11-11 00:01:37.443240 | orchestrator | 00:01:37.442 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=4a3981e2-f0be-42b6-8db3-1c199c343817]
2025-11-11 00:01:37.468627 | orchestrator | 00:01:37.468 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=4fe755f0-d785-40ac-8fd2-f4d04e795741]
2025-11-11 00:01:37.763485 | orchestrator | 00:01:37.763 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=c10d1a6a-9341-433b-bd81-178f9f69a248]
2025-11-11 00:01:37.770574 | orchestrator | 00:01:37.770 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d9417825-210b-4296-b532-140164fd716b]
2025-11-11 00:01:37.813364 | orchestrator | 00:01:37.813 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=046891ca-0b5c-40aa-b354-5b0e5c7203aa]
2025-11-11 00:01:37.942559 | orchestrator | 00:01:37.941 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8e156d3c-5b80-4fbd-ad5c-5330981c24e0]
2025-11-11 00:01:38.012646 | orchestrator | 00:01:38.012 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=c3652f35-8bb3-4ae5-ae2e-50d763285889]
2025-11-11 00:01:38.750394 | orchestrator | 00:01:38.750 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=c81b3325-afc5-4c8d-a3cb-d2e3d5168dc8]
2025-11-11 00:01:39.640053 | orchestrator | 00:01:39.639 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=7540477d-7ad7-4bf6-b2f7-64483ec57c51]
2025-11-11 00:01:39.654188 | orchestrator | 00:01:39.653 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-11-11 00:01:39.678359 | orchestrator | 00:01:39.678 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-11-11 00:01:39.678552 | orchestrator | 00:01:39.678 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-11-11 00:01:39.682189 | orchestrator | 00:01:39.682 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-11-11 00:01:39.686162 | orchestrator | 00:01:39.686 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-11-11 00:01:39.697997 | orchestrator | 00:01:39.697 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-11-11 00:01:39.699285 | orchestrator | 00:01:39.699 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-11-11 00:01:41.088214 | orchestrator | 00:01:41.088 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=0a610670-1fb7-41ca-9ce3-9e706e21f511]
2025-11-11 00:01:41.092739 | orchestrator | 00:01:41.092 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-11-11 00:01:41.103690 | orchestrator | 00:01:41.103 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-11-11 00:01:41.105799 | orchestrator | 00:01:41.105 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f7d18a4a08eb6205fb807e315fb53a19efefbd1d]
2025-11-11 00:01:41.113966 | orchestrator | 00:01:41.113 STDOUT terraform: local_file.inventory: Creating...
2025-11-11 00:01:41.117576 | orchestrator | 00:01:41.117 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=18591da6f02f084f02f1deba077897ed5e4364b6]
2025-11-11 00:01:42.031224 | orchestrator | 00:01:42.030 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0a610670-1fb7-41ca-9ce3-9e706e21f511]
2025-11-11 00:01:49.682648 | orchestrator | 00:01:49.682 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-11-11 00:01:49.684892 | orchestrator | 00:01:49.684 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-11-11 00:01:49.685019 | orchestrator | 00:01:49.684 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-11-11 00:01:49.687862 | orchestrator | 00:01:49.687 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-11-11 00:01:49.698457 | orchestrator | 00:01:49.698 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-11-11 00:01:49.702651 | orchestrator | 00:01:49.702 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-11-11 00:01:59.683209 | orchestrator | 00:01:59.682 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-11-11 00:01:59.685382 | orchestrator | 00:01:59.685 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-11-11 00:01:59.685482 | orchestrator | 00:01:59.685 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-11-11 00:01:59.687685 | orchestrator | 00:01:59.687 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-11-11 00:01:59.699275 | orchestrator | 00:01:59.699 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-11-11 00:01:59.703845 | orchestrator | 00:01:59.703 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-11-11 00:02:00.231434 | orchestrator | 00:02:00.231 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=674c0e8b-4d06-48cb-9c25-2af797ba4374]
2025-11-11 00:02:00.248493 | orchestrator | 00:02:00.248 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=b6e9ed6c-4f4a-439c-a457-b3f2bdf44f97]
2025-11-11 00:02:00.404696 | orchestrator | 00:02:00.404 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=583a3195-af7d-47a0-870b-c7b1265ccda4]
2025-11-11 00:02:09.685798 | orchestrator | 00:02:09.685 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-11-11 00:02:09.685925 | orchestrator | 00:02:09.685 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-11-11 00:02:09.700274 | orchestrator | 00:02:09.699 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-11-11 00:02:10.346323 | orchestrator | 00:02:10.346 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=2729c1b3-6b0d-40d7-bb6f-59003336af51]
2025-11-11 00:02:10.381648 | orchestrator | 00:02:10.381 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=917ddfe6-f6ed-4e37-b84a-ae02c17f7483]
2025-11-11 00:02:10.767840 | orchestrator | 00:02:10.767 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=beab5880-4dc5-4708-b363-d44b65c01b5e]
2025-11-11 00:02:10.787621 | orchestrator | 00:02:10.787 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-11-11 00:02:10.810847 | orchestrator | 00:02:10.810 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-11-11 00:02:10.811754 | orchestrator | 00:02:10.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-11-11 00:02:10.820683 | orchestrator | 00:02:10.820 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-11-11 00:02:10.824447 | orchestrator | 00:02:10.824 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7998560302275164604]
2025-11-11 00:02:10.829500 | orchestrator | 00:02:10.827 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-11-11 00:02:10.831665 | orchestrator | 00:02:10.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-11-11 00:02:10.831738 | orchestrator | 00:02:10.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-11-11 00:02:10.833354 | orchestrator | 00:02:10.833 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-11-11 00:02:10.839436 | orchestrator | 00:02:10.839 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-11-11 00:02:10.874665 | orchestrator | 00:02:10.874 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-11-11 00:02:10.894767 | orchestrator | 00:02:10.894 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-11-11 00:02:14.722106 | orchestrator | 00:02:14.721 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=b6e9ed6c-4f4a-439c-a457-b3f2bdf44f97/9b408528-4a47-4f88-ab85-e4a870a278b7]
2025-11-11 00:02:14.752507 | orchestrator | 00:02:14.752 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=674c0e8b-4d06-48cb-9c25-2af797ba4374/89b8de45-7543-4421-bfde-713d4c35668f]
2025-11-11 00:02:14.800752 | orchestrator | 00:02:14.800 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=b6e9ed6c-4f4a-439c-a457-b3f2bdf44f97/389e8dac-4c9f-40ba-96aa-7c861964ff1c]
2025-11-11 00:02:14.830887 | orchestrator | 00:02:14.830 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=674c0e8b-4d06-48cb-9c25-2af797ba4374/75ea1c13-08ac-4925-8283-d5e2f994ce5d]
2025-11-11 00:02:14.846117 | orchestrator | 00:02:14.845 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=2729c1b3-6b0d-40d7-bb6f-59003336af51/0178bab0-214e-4a1b-9430-5e2bb66f07d3]
2025-11-11 00:02:14.864830 | orchestrator | 00:02:14.864 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=2729c1b3-6b0d-40d7-bb6f-59003336af51/f9373fbe-39b8-4f8c-b928-1a6d36b5f860]
2025-11-11 00:02:20.835177 | orchestrator | 00:02:20.834 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Still creating... [10s elapsed]
2025-11-11 00:02:20.841715 | orchestrator | 00:02:20.841 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Still creating... [10s elapsed]
2025-11-11 00:02:20.876930 | orchestrator | 00:02:20.876 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Still creating... [10s elapsed]
2025-11-11 00:02:20.896392 | orchestrator | 00:02:20.896 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-11-11 00:02:20.937126 | orchestrator | 00:02:20.936 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=674c0e8b-4d06-48cb-9c25-2af797ba4374/40873841-1866-4eee-bbb6-ab8fbb214882]
2025-11-11 00:02:20.951772 | orchestrator | 00:02:20.951 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=b6e9ed6c-4f4a-439c-a457-b3f2bdf44f97/83daedb9-81f3-45a4-88c7-2785338cd97e]
2025-11-11 00:02:20.969437 | orchestrator | 00:02:20.969 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=2729c1b3-6b0d-40d7-bb6f-59003336af51/e779f17b-a915-42a5-9da7-11da2e062a34]
2025-11-11 00:02:30.900642 | orchestrator | 00:02:30.900 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-11-11 00:02:31.518689 | orchestrator | 00:02:31.518 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=7af1529d-4ed2-4f74-94af-a15d30eb7c9b]
2025-11-11 00:02:31.537900 | orchestrator | 00:02:31.537 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
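The apply finishes with empty-looking values for `manager_address` and `private_key`: both were marked `(sensitive value)` in the plan, so Terraform redacts them in the console. Declarations along these lines would produce that behavior; the output names match this log, but the value expressions are assumptions for illustration:

```hcl
# Sketch only: sensitive outputs are redacted in `terraform apply` console
# output and can be read later with `terraform output -raw <name>`.
# The value expressions below are assumed, not taken from the testbed repo.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```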
2025-11-11 00:02:31.538332 | orchestrator | 00:02:31.537 STDOUT terraform: Outputs:
2025-11-11 00:02:31.538452 | orchestrator | 00:02:31.538 STDOUT terraform: manager_address =
2025-11-11 00:02:31.538475 | orchestrator | 00:02:31.538 STDOUT terraform: private_key =
2025-11-11 00:02:32.022355 | orchestrator | ok: Runtime: 0:01:09.404947
2025-11-11 00:02:32.067955 |
2025-11-11 00:02:32.068071 | TASK [Create infrastructure (stable)]
2025-11-11 00:02:32.599468 | orchestrator | skipping: Conditional result was False
2025-11-11 00:02:32.615838 |
2025-11-11 00:02:32.615968 | TASK [Fetch manager address]
2025-11-11 00:02:33.037591 | orchestrator | ok
2025-11-11 00:02:33.047236 |
2025-11-11 00:02:33.047341 | TASK [Set manager_host address]
2025-11-11 00:02:33.115354 | orchestrator | ok
2025-11-11 00:02:33.124301 |
2025-11-11 00:02:33.124403 | LOOP [Update ansible collections]
2025-11-11 00:02:35.181759 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-11 00:02:35.182135 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-11 00:02:35.182201 | orchestrator | Starting galaxy collection install process
2025-11-11 00:02:35.182244 | orchestrator | Process install dependency map
2025-11-11 00:02:35.182282 | orchestrator | Starting collection install process
2025-11-11 00:02:35.182318 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2025-11-11 00:02:35.182359 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2025-11-11 00:02:35.182401 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-11-11 00:02:35.182476 | orchestrator | ok: Item: commons Runtime: 0:00:01.721685
2025-11-11 00:02:36.440825 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-11 00:02:36.440968 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-11 00:02:36.441020 | orchestrator | Starting galaxy collection install process
2025-11-11 00:02:36.441060 | orchestrator | Process install dependency map
2025-11-11 00:02:36.441099 | orchestrator | Starting collection install process
2025-11-11 00:02:36.441134 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2025-11-11 00:02:36.441169 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2025-11-11 00:02:36.441204 | orchestrator | osism.services:999.0.0 was installed successfully
2025-11-11 00:02:36.441257 | orchestrator | ok: Item: services Runtime: 0:00:00.970850
2025-11-11 00:02:36.456698 |
2025-11-11 00:02:36.456885 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-11-11 00:02:47.058115 | orchestrator | ok
2025-11-11 00:02:47.069038 |
2025-11-11 00:02:47.069153 | TASK [Wait a little longer for the manager so that everything is ready]
2025-11-11 00:03:47.108457 | orchestrator | ok
2025-11-11 00:03:47.115316 |
2025-11-11 00:03:47.115400 | TASK [Fetch manager ssh hostkey]
2025-11-11 00:03:48.672089 | orchestrator | Output suppressed because no_log was given
2025-11-11 00:03:48.678554 |
2025-11-11 00:03:48.678643 | TASK [Get ssh keypair from terraform environment]
2025-11-11 00:03:49.210785 | orchestrator | ok: Runtime: 0:00:00.011091
2025-11-11 00:03:49.227790 |
2025-11-11 00:03:49.227939 | TASK [Point out that the following task takes some time and does not give any output]
2025-11-11 00:03:49.259186 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-11-11 00:03:49.267190 |
2025-11-11 00:03:49.267279 | TASK [Run manager part 0]
2025-11-11 00:03:50.356919 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-11 00:03:50.402298 | orchestrator |
2025-11-11 00:03:50.402340 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-11-11 00:03:50.402348 | orchestrator |
2025-11-11 00:03:50.402360 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-11-11 00:03:52.074011 | orchestrator | ok: [testbed-manager]
2025-11-11 00:03:52.074143 | orchestrator |
2025-11-11 00:03:52.074188 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-11-11 00:03:52.074206 | orchestrator |
2025-11-11 00:03:52.074223 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-11 00:03:53.933198 | orchestrator | ok: [testbed-manager]
2025-11-11 00:03:53.933259 | orchestrator |
2025-11-11 00:03:53.933266 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-11-11 00:03:54.675571 | orchestrator | ok: [testbed-manager]
2025-11-11 00:03:54.675900 | orchestrator |
2025-11-11 00:03:54.675931 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-11-11 00:03:54.735957 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:03:54.736065 | orchestrator |
2025-11-11 00:03:54.736083 | orchestrator | TASK [Update package cache] ****************************************************
2025-11-11 00:03:54.777382 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:03:54.777473 | orchestrator |
2025-11-11 00:03:54.777483 | orchestrator | TASK [Install required packages] ***********************************************
2025-11-11 00:03:54.815084 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:03:54.815162 |
orchestrator | 2025-11-11 00:03:54.815172 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-11 00:03:54.851332 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:03:54.851422 | orchestrator | 2025-11-11 00:03:54.851433 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-11 00:03:54.890396 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:03:54.890462 | orchestrator | 2025-11-11 00:03:54.890471 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-11-11 00:03:54.929434 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:03:54.929500 | orchestrator | 2025-11-11 00:03:54.929508 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-11-11 00:03:54.961430 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:03:54.961483 | orchestrator | 2025-11-11 00:03:54.961491 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-11-11 00:03:55.675631 | orchestrator | changed: [testbed-manager] 2025-11-11 00:03:55.675691 | orchestrator | 2025-11-11 00:03:55.675700 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-11-11 00:06:29.982391 | orchestrator | changed: [testbed-manager] 2025-11-11 00:06:29.982459 | orchestrator | 2025-11-11 00:06:29.982477 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-11 00:07:50.904063 | orchestrator | changed: [testbed-manager] 2025-11-11 00:07:50.904165 | orchestrator | 2025-11-11 00:07:50.904182 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-11 00:08:10.785357 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:10.785441 | orchestrator | 2025-11-11 00:08:10.785460 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-11-11 00:08:19.298686 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:19.298725 | orchestrator | 2025-11-11 00:08:19.298733 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-11 00:08:19.347666 | orchestrator | ok: [testbed-manager] 2025-11-11 00:08:19.347701 | orchestrator | 2025-11-11 00:08:19.347709 | orchestrator | TASK [Get current user] ******************************************************** 2025-11-11 00:08:20.086374 | orchestrator | ok: [testbed-manager] 2025-11-11 00:08:20.086413 | orchestrator | 2025-11-11 00:08:20.086424 | orchestrator | TASK [Create venv directory] *************************************************** 2025-11-11 00:08:20.759667 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:20.759706 | orchestrator | 2025-11-11 00:08:20.759715 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-11-11 00:08:27.029741 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:27.029854 | orchestrator | 2025-11-11 00:08:27.029895 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-11-11 00:08:32.863497 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:32.863537 | orchestrator | 2025-11-11 00:08:32.863547 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-11-11 00:08:35.354156 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:35.354216 | orchestrator | 2025-11-11 00:08:35.354232 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-11-11 00:08:37.056272 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:37.056351 | orchestrator | 2025-11-11 00:08:37.056366 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-11-11 
00:08:38.150403 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-11 00:08:38.151256 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-11 00:08:38.151267 | orchestrator | 2025-11-11 00:08:38.151277 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-11-11 00:08:38.191237 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-11 00:08:38.191272 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-11 00:08:38.191281 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-11 00:08:38.191289 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-11-11 00:08:41.695075 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-11 00:08:41.695128 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-11 00:08:41.695135 | orchestrator | 2025-11-11 00:08:41.695142 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-11-11 00:08:42.242340 | orchestrator | changed: [testbed-manager] 2025-11-11 00:08:42.242399 | orchestrator | 2025-11-11 00:08:42.242410 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-11-11 00:10:02.889191 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-11-11 00:10:02.889276 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-11-11 00:10:02.889292 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-11-11 00:10:02.889305 | orchestrator | 2025-11-11 00:10:02.889317 | orchestrator | TASK [Install local collections] *********************************************** 2025-11-11 00:10:05.129197 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-11-11 00:10:05.129228 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-11-11 00:10:05.129232 | orchestrator | 2025-11-11 00:10:05.129237 | orchestrator | PLAY [Create operator user] **************************************************** 2025-11-11 00:10:05.129241 | orchestrator | 2025-11-11 00:10:05.129245 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-11 00:10:06.492195 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:06.492228 | orchestrator | 2025-11-11 00:10:06.492236 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-11 00:10:06.538954 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:06.538990 | orchestrator | 2025-11-11 00:10:06.538999 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-11 00:10:06.609483 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:06.609520 | orchestrator | 2025-11-11 00:10:06.609528 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-11 00:10:07.326099 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:07.326175 | orchestrator | 2025-11-11 00:10:07.326190 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-11 00:10:08.159603 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:08.241166 | orchestrator | 2025-11-11 00:10:08.241220 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-11 00:10:09.500580 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-11-11 00:10:09.500650 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-11-11 00:10:09.500664 | orchestrator | 2025-11-11 00:10:09.500690 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-11-11 00:10:10.842181 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:10.842285 | orchestrator | 2025-11-11 00:10:10.842301 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-11 00:10:12.537629 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:10:12.537716 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-11-11 00:10:12.537730 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:10:12.537760 | orchestrator | 2025-11-11 00:10:12.537773 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-11 00:10:12.596589 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:12.596661 | orchestrator | 2025-11-11 00:10:12.596675 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2025-11-11 00:10:12.668857 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:12.668946 | orchestrator | 2025-11-11 00:10:12.668963 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-11 00:10:13.197075 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:13.197182 | orchestrator | 2025-11-11 00:10:13.197199 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-11 00:10:13.267067 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:13.267111 | orchestrator | 2025-11-11 00:10:13.267117 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-11 00:10:14.060688 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-11 00:10:14.060787 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:14.060803 | orchestrator | 2025-11-11 00:10:14.060816 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-11 00:10:14.098674 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:14.098775 | orchestrator | 2025-11-11 00:10:14.098793 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-11 00:10:14.131292 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:14.131350 | orchestrator | 2025-11-11 00:10:14.131364 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-11 00:10:14.163906 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:14.163973 | orchestrator | 2025-11-11 00:10:14.163989 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-11 00:10:14.224949 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:14.225009 | orchestrator | 2025-11-11 00:10:14.225023 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-11 00:10:14.932384 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:14.932468 | orchestrator | 2025-11-11 00:10:14.932484 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-11 00:10:14.932497 | orchestrator | 2025-11-11 00:10:14.932511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-11 00:10:16.315498 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:16.315578 | orchestrator | 2025-11-11 00:10:16.315594 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-11-11 00:10:17.254002 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:17.254121 | orchestrator | 2025-11-11 00:10:17.254138 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:10:17.254150 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2025-11-11 00:10:17.254161 | orchestrator | 2025-11-11 00:10:17.558102 | orchestrator | ok: Runtime: 0:06:27.690275 2025-11-11 00:10:17.575988 | 2025-11-11 00:10:17.576126 | TASK [Point out that logging in on the manager is now possible] 2025-11-11 00:10:17.612362 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-11-11 00:10:17.621793 | 2025-11-11 00:10:17.621888 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-11 00:10:17.659393 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-11-11 00:10:17.669272 | 2025-11-11 00:10:17.669371 | TASK [Run manager part 1 + 2] 2025-11-11 00:10:18.439470 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-11 00:10:18.484272 | orchestrator | 2025-11-11 00:10:18.484308 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-11-11 00:10:18.484315 | orchestrator | 2025-11-11 00:10:18.484326 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-11 00:10:21.343781 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:21.343822 | orchestrator | 2025-11-11 00:10:21.343844 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-11 00:10:21.378255 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:21.378292 | orchestrator | 2025-11-11 00:10:21.378300 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-11 00:10:21.419968 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:21.420010 | orchestrator | 2025-11-11 00:10:21.420019 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2025-11-11 00:10:21.464897 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:21.464942 | orchestrator | 2025-11-11 00:10:21.464951 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-11 00:10:21.533235 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:21.533282 | orchestrator | 2025-11-11 00:10:21.533292 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-11 00:10:21.588870 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:21.588914 | orchestrator | 2025-11-11 00:10:21.588923 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-11 00:10:21.628856 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-11-11 00:10:21.628890 | orchestrator | 2025-11-11 00:10:21.628895 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-11 00:10:22.309487 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:22.309536 | orchestrator | 2025-11-11 00:10:22.309546 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-11 00:10:22.357018 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:22.357064 | orchestrator | 2025-11-11 00:10:22.357072 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-11 00:10:23.622282 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:23.622373 | orchestrator | 2025-11-11 00:10:23.622393 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-11 00:10:24.172776 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:24.172856 | orchestrator | 2025-11-11 00:10:24.172873 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2025-11-11 00:10:25.273500 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:25.273560 | orchestrator | 2025-11-11 00:10:25.273577 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-11 00:10:43.820594 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:43.820996 | orchestrator | 2025-11-11 00:10:43.821021 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-11 00:10:44.480325 | orchestrator | ok: [testbed-manager] 2025-11-11 00:10:44.480418 | orchestrator | 2025-11-11 00:10:44.480437 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-11 00:10:44.531242 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:44.531330 | orchestrator | 2025-11-11 00:10:44.531348 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-11-11 00:10:45.764930 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:45.764982 | orchestrator | 2025-11-11 00:10:45.764993 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-11-11 00:10:46.726743 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:46.726806 | orchestrator | 2025-11-11 00:10:46.726813 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-11-11 00:10:47.276325 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:47.276415 | orchestrator | 2025-11-11 00:10:47.276433 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-11-11 00:10:47.318242 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-11 00:10:47.318312 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2025-11-11 00:10:47.318318 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-11 00:10:47.318323 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-11-11 00:10:49.131049 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:49.131139 | orchestrator | 2025-11-11 00:10:49.131157 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-11-11 00:10:57.537966 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-11-11 00:10:57.538012 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-11-11 00:10:57.538071 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-11-11 00:10:57.538079 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-11-11 00:10:57.538089 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-11-11 00:10:57.538096 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-11-11 00:10:57.538102 | orchestrator | 2025-11-11 00:10:57.538109 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-11-11 00:10:58.564429 | orchestrator | changed: [testbed-manager] 2025-11-11 00:10:58.564468 | orchestrator | 2025-11-11 00:10:58.564477 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-11-11 00:10:58.604024 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:10:58.604054 | orchestrator | 2025-11-11 00:10:58.604063 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-11-11 00:11:01.609005 | orchestrator | changed: [testbed-manager] 2025-11-11 00:11:01.609602 | orchestrator | 2025-11-11 00:11:01.609620 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-11-11 00:11:01.651211 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:11:01.651241 | 
orchestrator | 2025-11-11 00:11:01.651250 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-11-11 00:12:38.101624 | orchestrator | changed: [testbed-manager] 2025-11-11 00:12:38.101716 | orchestrator | 2025-11-11 00:12:38.101736 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-11 00:12:39.163541 | orchestrator | ok: [testbed-manager] 2025-11-11 00:12:39.163600 | orchestrator | 2025-11-11 00:12:39.163616 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:12:39.163628 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-11-11 00:12:39.163638 | orchestrator | 2025-11-11 00:12:39.285461 | orchestrator | ok: Runtime: 0:02:21.253449 2025-11-11 00:12:39.301515 | 2025-11-11 00:12:39.301678 | TASK [Reboot manager] 2025-11-11 00:12:40.838648 | orchestrator | ok: Runtime: 0:00:00.929000 2025-11-11 00:12:40.854232 | 2025-11-11 00:12:40.854363 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-11 00:12:58.034244 | orchestrator | ok 2025-11-11 00:12:58.044639 | 2025-11-11 00:12:58.044761 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-11 00:13:58.081789 | orchestrator | ok 2025-11-11 00:13:58.094952 | 2025-11-11 00:13:58.095180 | TASK [Deploy manager + bootstrap nodes] 2025-11-11 00:14:02.261456 | orchestrator | 2025-11-11 00:14:02.261676 | orchestrator | # DEPLOY MANAGER 2025-11-11 00:14:02.261702 | orchestrator | 2025-11-11 00:14:02.261717 | orchestrator | + set -e 2025-11-11 00:14:02.261731 | orchestrator | + echo 2025-11-11 00:14:02.261745 | orchestrator | + echo '# DEPLOY MANAGER' 2025-11-11 00:14:02.261762 | orchestrator | + echo 2025-11-11 00:14:02.261810 | orchestrator | + cat /opt/manager-vars.sh 2025-11-11 00:14:02.265126 | orchestrator | export NUMBER_OF_NODES=6 2025-11-11 
00:14:02.265151 | orchestrator | 2025-11-11 00:14:02.265163 | orchestrator | export CEPH_VERSION=reef 2025-11-11 00:14:02.265176 | orchestrator | export CONFIGURATION_VERSION=main 2025-11-11 00:14:02.265188 | orchestrator | export MANAGER_VERSION=latest 2025-11-11 00:14:02.265209 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-11-11 00:14:02.265221 | orchestrator | 2025-11-11 00:14:02.265239 | orchestrator | export ARA=false 2025-11-11 00:14:02.265250 | orchestrator | export DEPLOY_MODE=manager 2025-11-11 00:14:02.265267 | orchestrator | export TEMPEST=true 2025-11-11 00:14:02.265279 | orchestrator | export IS_ZUUL=true 2025-11-11 00:14:02.265290 | orchestrator | 2025-11-11 00:14:02.265308 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227 2025-11-11 00:14:02.265319 | orchestrator | export EXTERNAL_API=false 2025-11-11 00:14:02.265330 | orchestrator | 2025-11-11 00:14:02.265341 | orchestrator | export IMAGE_USER=ubuntu 2025-11-11 00:14:02.265355 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-11-11 00:14:02.265366 | orchestrator | 2025-11-11 00:14:02.265377 | orchestrator | export CEPH_STACK=ceph-ansible 2025-11-11 00:14:02.265393 | orchestrator | 2025-11-11 00:14:02.265404 | orchestrator | + echo 2025-11-11 00:14:02.265417 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-11 00:14:02.266437 | orchestrator | ++ export INTERACTIVE=false 2025-11-11 00:14:02.266456 | orchestrator | ++ INTERACTIVE=false 2025-11-11 00:14:02.266470 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-11 00:14:02.266486 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-11 00:14:02.266671 | orchestrator | + source /opt/manager-vars.sh 2025-11-11 00:14:02.266687 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-11 00:14:02.266698 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-11 00:14:02.266714 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-11 00:14:02.266725 | orchestrator | ++ CEPH_VERSION=reef 2025-11-11 00:14:02.266736 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2025-11-11 00:14:02.266754 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-11 00:14:02.266765 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-11 00:14:02.266776 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-11 00:14:02.266790 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-11 00:14:02.266809 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-11 00:14:02.266821 | orchestrator | ++ export ARA=false 2025-11-11 00:14:02.266832 | orchestrator | ++ ARA=false 2025-11-11 00:14:02.266843 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-11 00:14:02.266854 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-11 00:14:02.266864 | orchestrator | ++ export TEMPEST=true 2025-11-11 00:14:02.266883 | orchestrator | ++ TEMPEST=true 2025-11-11 00:14:02.266894 | orchestrator | ++ export IS_ZUUL=true 2025-11-11 00:14:02.266904 | orchestrator | ++ IS_ZUUL=true 2025-11-11 00:14:02.266915 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227 2025-11-11 00:14:02.266930 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227 2025-11-11 00:14:02.266941 | orchestrator | ++ export EXTERNAL_API=false 2025-11-11 00:14:02.266952 | orchestrator | ++ EXTERNAL_API=false 2025-11-11 00:14:02.266963 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-11 00:14:02.266974 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-11 00:14:02.266985 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-11 00:14:02.266995 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-11 00:14:02.267006 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-11 00:14:02.267017 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-11 00:14:02.267029 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-11-11 00:14:02.322111 | orchestrator | + docker version 2025-11-11 00:14:02.598270 | orchestrator | Client: Docker Engine - Community 2025-11-11 00:14:02.598342 | orchestrator | Version: 27.5.1 
2025-11-11 00:14:02.598353 | orchestrator | API version: 1.47 2025-11-11 00:14:02.598362 | orchestrator | Go version: go1.22.11 2025-11-11 00:14:02.598369 | orchestrator | Git commit: 9f9e405 2025-11-11 00:14:02.598376 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-11 00:14:02.598384 | orchestrator | OS/Arch: linux/amd64 2025-11-11 00:14:02.598391 | orchestrator | Context: default 2025-11-11 00:14:02.598398 | orchestrator | 2025-11-11 00:14:02.598405 | orchestrator | Server: Docker Engine - Community 2025-11-11 00:14:02.598412 | orchestrator | Engine: 2025-11-11 00:14:02.598419 | orchestrator | Version: 27.5.1 2025-11-11 00:14:02.598426 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-11-11 00:14:02.598453 | orchestrator | Go version: go1.22.11 2025-11-11 00:14:02.598461 | orchestrator | Git commit: 4c9b3b0 2025-11-11 00:14:02.598468 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-11 00:14:02.598474 | orchestrator | OS/Arch: linux/amd64 2025-11-11 00:14:02.598481 | orchestrator | Experimental: false 2025-11-11 00:14:02.598488 | orchestrator | containerd: 2025-11-11 00:14:02.598495 | orchestrator | Version: v2.1.5 2025-11-11 00:14:02.598502 | orchestrator | GitCommit: fcd43222d6b07379a4be9786bda52438f0dd16a1 2025-11-11 00:14:02.598509 | orchestrator | runc: 2025-11-11 00:14:02.598516 | orchestrator | Version: 1.3.3 2025-11-11 00:14:02.598523 | orchestrator | GitCommit: v1.3.3-0-gd842d771 2025-11-11 00:14:02.598531 | orchestrator | docker-init: 2025-11-11 00:14:02.598538 | orchestrator | Version: 0.19.0 2025-11-11 00:14:02.598545 | orchestrator | GitCommit: de40ad0 2025-11-11 00:14:02.600955 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-11-11 00:14:02.608090 | orchestrator | + set -e 2025-11-11 00:14:02.608135 | orchestrator | + source /opt/manager-vars.sh 2025-11-11 00:14:02.608180 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-11 00:14:02.608208 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-11 
00:14:02.608220 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-11 00:14:02.608231 | orchestrator | ++ CEPH_VERSION=reef 2025-11-11 00:14:02.608243 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-11 00:14:02.608254 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-11 00:14:02.608276 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-11 00:14:02.608287 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-11 00:14:02.608341 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-11 00:14:02.608355 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-11 00:14:02.608372 | orchestrator | ++ export ARA=false 2025-11-11 00:14:02.608383 | orchestrator | ++ ARA=false 2025-11-11 00:14:02.608394 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-11 00:14:02.608406 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-11 00:14:02.608417 | orchestrator | ++ export TEMPEST=true 2025-11-11 00:14:02.608427 | orchestrator | ++ TEMPEST=true 2025-11-11 00:14:02.608438 | orchestrator | ++ export IS_ZUUL=true 2025-11-11 00:14:02.608449 | orchestrator | ++ IS_ZUUL=true 2025-11-11 00:14:02.608460 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227 2025-11-11 00:14:02.608471 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227 2025-11-11 00:14:02.608482 | orchestrator | ++ export EXTERNAL_API=false 2025-11-11 00:14:02.608493 | orchestrator | ++ EXTERNAL_API=false 2025-11-11 00:14:02.608504 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-11 00:14:02.608515 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-11 00:14:02.608530 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-11 00:14:02.608541 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-11 00:14:02.608552 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-11 00:14:02.608563 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-11 00:14:02.608574 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-11 00:14:02.608609 | orchestrator | ++ export 
INTERACTIVE=false 2025-11-11 00:14:02.608620 | orchestrator | ++ INTERACTIVE=false 2025-11-11 00:14:02.608638 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-11 00:14:02.608653 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-11 00:14:02.608669 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-11 00:14:02.608680 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-11 00:14:02.608692 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-11-11 00:14:02.615793 | orchestrator | + set -e 2025-11-11 00:14:02.615819 | orchestrator | + VERSION=reef 2025-11-11 00:14:02.616683 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-11 00:14:02.620816 | orchestrator | + [[ -n ceph_version: reef ]] 2025-11-11 00:14:02.620840 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-11-11 00:14:02.627218 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-11-11 00:14:02.633625 | orchestrator | + set -e 2025-11-11 00:14:02.633651 | orchestrator | + VERSION=2024.2 2025-11-11 00:14:02.634688 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-11 00:14:02.638333 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-11-11 00:14:02.638359 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-11-11 00:14:02.641996 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-11-11 00:14:02.642998 | orchestrator | ++ semver latest 7.0.0 2025-11-11 00:14:02.703513 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-11 00:14:02.703600 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-11 00:14:02.703615 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-11-11 00:14:02.703627 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 
2025-11-11 00:14:02.791067 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-11-11 00:14:02.792432 | orchestrator | + source /opt/venv/bin/activate
2025-11-11 00:14:02.793575 | orchestrator | ++ deactivate nondestructive
2025-11-11 00:14:02.793662 | orchestrator | ++ '[' -n '' ']'
2025-11-11 00:14:02.793677 | orchestrator | ++ '[' -n '' ']'
2025-11-11 00:14:02.793689 | orchestrator | ++ hash -r
2025-11-11 00:14:02.793834 | orchestrator | ++ '[' -n '' ']'
2025-11-11 00:14:02.793849 | orchestrator | ++ unset VIRTUAL_ENV
2025-11-11 00:14:02.793860 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-11-11 00:14:02.793871 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-11-11 00:14:02.793985 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-11-11 00:14:02.794001 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-11-11 00:14:02.794012 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-11-11 00:14:02.794092 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-11-11 00:14:02.794225 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-11 00:14:02.794242 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-11 00:14:02.794253 | orchestrator | ++ export PATH
2025-11-11 00:14:02.794264 | orchestrator | ++ '[' -n '' ']'
2025-11-11 00:14:02.794374 | orchestrator | ++ '[' -z '' ']'
2025-11-11 00:14:02.794387 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-11-11 00:14:02.794398 | orchestrator | ++ PS1='(venv) '
2025-11-11 00:14:02.794409 | orchestrator | ++ export PS1
2025-11-11 00:14:02.794428 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-11-11 00:14:02.794439 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-11-11 00:14:02.794454 | orchestrator | ++ hash -r
2025-11-11 00:14:02.794524 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-11-11 00:14:03.986932 | orchestrator |
2025-11-11 00:14:03.987025 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-11-11 00:14:03.987032 | orchestrator |
2025-11-11 00:14:03.987036 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-11 00:14:04.553002 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:04.553116 | orchestrator |
2025-11-11 00:14:04.553131 | orchestrator | TASK [Copy fact files] *********************************************************
2025-11-11 00:14:05.535801 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:05.535923 | orchestrator |
2025-11-11 00:14:05.535941 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-11-11 00:14:05.535955 | orchestrator |
2025-11-11 00:14:05.535966 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-11 00:14:07.826741 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:07.826851 | orchestrator |
2025-11-11 00:14:07.827435 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-11-11 00:14:07.879627 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:07.879678 | orchestrator |
2025-11-11 00:14:07.879701 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-11-11 00:14:08.340510 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:08.340660 | orchestrator |
2025-11-11 00:14:08.340679 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-11-11 00:14:08.377440 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:08.377515 | orchestrator |
2025-11-11 00:14:08.377529 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-11-11 00:14:08.707683 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:08.707768 | orchestrator |
2025-11-11 00:14:08.707783 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-11-11 00:14:08.757682 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:08.757748 | orchestrator |
2025-11-11 00:14:08.757759 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-11-11 00:14:09.099619 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:09.099716 | orchestrator |
2025-11-11 00:14:09.099733 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-11-11 00:14:09.238443 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:09.238532 | orchestrator |
2025-11-11 00:14:09.238546 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-11-11 00:14:09.238558 | orchestrator |
2025-11-11 00:14:09.238606 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-11 00:14:10.869622 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:10.869716 | orchestrator |
2025-11-11 00:14:10.869733 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-11-11 00:14:10.962313 | orchestrator | included: osism.services.traefik for testbed-manager
2025-11-11 00:14:10.962390 | orchestrator |
2025-11-11 00:14:10.962403 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-11-11 00:14:11.017624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-11-11 00:14:11.017654 | orchestrator |
2025-11-11 00:14:11.017666 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-11-11 00:14:12.120437 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-11-11 00:14:12.120524 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-11-11 00:14:12.120537 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-11-11 00:14:12.120549 | orchestrator |
2025-11-11 00:14:12.120561 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-11-11 00:14:13.927643 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-11-11 00:14:13.927749 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-11-11 00:14:13.927769 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-11-11 00:14:13.927782 | orchestrator |
2025-11-11 00:14:13.927794 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-11-11 00:14:14.559854 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-11 00:14:14.559949 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:14.559964 | orchestrator |
2025-11-11 00:14:14.559977 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-11-11 00:14:15.226806 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-11 00:14:15.226897 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:15.226913 | orchestrator |
2025-11-11 00:14:15.226925 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-11-11 00:14:15.283328 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:15.283388 | orchestrator |
2025-11-11 00:14:15.283405 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-11-11 00:14:15.640990 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:15.641093 | orchestrator |
2025-11-11 00:14:15.641118 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-11-11 00:14:15.716843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-11-11 00:14:15.716930 | orchestrator |
2025-11-11 00:14:15.716953 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-11-11 00:14:16.799944 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:16.800030 | orchestrator |
2025-11-11 00:14:16.800045 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-11-11 00:14:17.588785 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:17.588874 | orchestrator |
2025-11-11 00:14:17.588889 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-11-11 00:14:27.958072 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:27.958169 | orchestrator |
2025-11-11 00:14:27.958185 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-11-11 00:14:27.996856 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:27.996904 | orchestrator |
2025-11-11 00:14:27.996918 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-11-11 00:14:27.996929 | orchestrator |
2025-11-11 00:14:27.996940 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-11 00:14:29.682843 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:29.682905 | orchestrator |
2025-11-11 00:14:29.682934 | orchestrator | TASK [Apply manager role] ******************************************************
2025-11-11 00:14:29.798546 | orchestrator | included: osism.services.manager for testbed-manager
2025-11-11 00:14:29.798665 | orchestrator |
2025-11-11 00:14:29.798681 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-11-11 00:14:29.854494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-11-11 00:14:29.854597 | orchestrator |
2025-11-11 00:14:29.854616 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-11-11 00:14:32.333130 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:32.333223 | orchestrator |
2025-11-11 00:14:32.333238 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-11-11 00:14:32.385640 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:32.385701 | orchestrator |
2025-11-11 00:14:32.385717 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-11-11 00:14:32.514699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-11-11 00:14:32.514730 | orchestrator |
2025-11-11 00:14:32.514742 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-11-11 00:14:35.312867 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-11-11 00:14:35.312954 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-11-11 00:14:35.312969 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-11-11 00:14:35.312981 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-11-11 00:14:35.312992 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-11-11 00:14:35.313003 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-11-11 00:14:35.313014 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-11-11 00:14:35.313026 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-11-11 00:14:35.313037 | orchestrator |
2025-11-11 00:14:35.313049 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-11-11 00:14:35.913722 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:35.913805 | orchestrator |
2025-11-11 00:14:35.913819 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-11-11 00:14:36.513634 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:36.513698 | orchestrator |
2025-11-11 00:14:36.513714 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-11-11 00:14:36.594302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-11-11 00:14:36.594353 | orchestrator |
2025-11-11 00:14:36.594365 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-11-11 00:14:37.763597 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-11-11 00:14:37.763692 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-11-11 00:14:37.763708 | orchestrator |
2025-11-11 00:14:37.763720 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-11-11 00:14:38.380025 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:38.380109 | orchestrator |
2025-11-11 00:14:38.380124 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-11-11 00:14:38.435266 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:38.435296 | orchestrator |
2025-11-11 00:14:38.435308 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-11-11 00:14:38.510976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-11-11 00:14:38.511018 | orchestrator |
2025-11-11 00:14:38.511030 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-11-11 00:14:39.141531 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:39.141669 | orchestrator |
2025-11-11 00:14:39.141685 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-11-11 00:14:39.197802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-11-11 00:14:39.197912 | orchestrator |
2025-11-11 00:14:39.197927 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-11-11 00:14:40.520771 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-11 00:14:40.520855 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-11 00:14:40.520869 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:40.520883 | orchestrator |
2025-11-11 00:14:40.520895 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-11-11 00:14:41.157414 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:41.157479 | orchestrator |
2025-11-11 00:14:41.157489 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-11-11 00:14:41.217388 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:41.217449 | orchestrator |
2025-11-11 00:14:41.217459 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-11-11 00:14:41.302588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-11-11 00:14:41.302653 | orchestrator |
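Every console line above carries the Zuul prefix `<date> <time> | <node> | <payload>`. When comparing two job runs, it helps to strip that prefix and diff only the payloads. A small convenience sketch (not part of the job itself; `strip_zuul_prefix` is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# Strip the Zuul console prefix "<date> <time> | <node> | " from log lines,
# leaving only the raw shell/Ansible output.
strip_zuul_prefix() {
    sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ \| [^|]+ \| ?//'
}

echo '2025-11-11 00:14:39.141531 | orchestrator | changed: [testbed-manager]' | strip_zuul_prefix
# → changed: [testbed-manager]
```

Piping a saved console log through this filter (e.g. `strip_zuul_prefix < job-output.txt`) recovers output suitable for `diff` against another run.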
2025-11-11 00:14:41.302662 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-11-11 00:14:41.806326 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:41.806397 | orchestrator |
2025-11-11 00:14:41.806407 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-11-11 00:14:42.205380 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:42.205443 | orchestrator |
2025-11-11 00:14:42.205453 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-11-11 00:14:43.459893 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-11-11 00:14:43.459980 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-11-11 00:14:43.459995 | orchestrator |
2025-11-11 00:14:43.460007 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-11-11 00:14:44.070680 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:44.070777 | orchestrator |
2025-11-11 00:14:44.070794 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-11-11 00:14:44.448537 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:44.448677 | orchestrator |
2025-11-11 00:14:44.448694 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-11-11 00:14:44.795336 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:44.795420 | orchestrator |
2025-11-11 00:14:44.795434 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-11-11 00:14:44.844830 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:44.844917 | orchestrator |
2025-11-11 00:14:44.844934 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-11-11 00:14:44.913526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-11-11 00:14:44.913641 | orchestrator |
2025-11-11 00:14:44.913655 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-11-11 00:14:44.953831 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:44.953876 | orchestrator |
2025-11-11 00:14:44.953889 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-11-11 00:14:46.911574 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-11-11 00:14:46.911668 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-11-11 00:14:46.911684 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-11-11 00:14:46.911697 | orchestrator |
2025-11-11 00:14:46.911708 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-11-11 00:14:47.622635 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:47.622730 | orchestrator |
2025-11-11 00:14:47.622749 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-11-11 00:14:48.331617 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:48.331701 | orchestrator |
2025-11-11 00:14:48.331715 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-11-11 00:14:49.031984 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:49.032067 | orchestrator |
2025-11-11 00:14:49.032081 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-11-11 00:14:49.100815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-11-11 00:14:49.100896 | orchestrator |
2025-11-11 00:14:49.100910 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-11-11 00:14:49.143824 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:49.143875 | orchestrator |
2025-11-11 00:14:49.143891 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-11-11 00:14:49.810515 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-11-11 00:14:49.810650 | orchestrator |
2025-11-11 00:14:49.810666 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-11-11 00:14:49.886506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-11-11 00:14:49.886593 | orchestrator |
2025-11-11 00:14:49.886606 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-11-11 00:14:50.584079 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:50.584155 | orchestrator |
2025-11-11 00:14:50.584168 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-11-11 00:14:51.161368 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:51.161450 | orchestrator |
2025-11-11 00:14:51.161465 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-11-11 00:14:51.216015 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:14:51.216075 | orchestrator |
2025-11-11 00:14:51.216088 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-11-11 00:14:51.260524 | orchestrator | ok: [testbed-manager]
2025-11-11 00:14:51.260620 | orchestrator |
2025-11-11 00:14:51.260634 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-11-11 00:14:52.040365 | orchestrator | changed: [testbed-manager]
2025-11-11 00:14:52.040447 | orchestrator |
2025-11-11 00:14:52.040462 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-11-11 00:15:59.309209 | orchestrator | changed: [testbed-manager]
2025-11-11 00:15:59.309321 | orchestrator |
2025-11-11 00:15:59.309338 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-11-11 00:16:00.272402 | orchestrator | ok: [testbed-manager]
2025-11-11 00:16:00.272517 | orchestrator |
2025-11-11 00:16:00.272534 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-11-11 00:16:00.326908 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:16:00.326967 | orchestrator |
2025-11-11 00:16:00.326983 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-11-11 00:16:07.806421 | orchestrator | changed: [testbed-manager]
2025-11-11 00:16:07.806550 | orchestrator |
2025-11-11 00:16:07.806568 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-11-11 00:16:07.866645 | orchestrator | ok: [testbed-manager]
2025-11-11 00:16:07.866691 | orchestrator |
2025-11-11 00:16:07.866705 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-11-11 00:16:07.866717 | orchestrator |
2025-11-11 00:16:07.866728 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-11-11 00:16:07.909153 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:16:07.909197 | orchestrator |
2025-11-11 00:16:07.909214 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-11-11 00:17:07.968772 | orchestrator | Pausing for 60 seconds
2025-11-11 00:17:07.968876 | orchestrator | changed: [testbed-manager]
2025-11-11 00:17:07.968891 | orchestrator |
2025-11-11 00:17:07.968904 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-11-11 00:17:13.620495 | orchestrator | changed: [testbed-manager]
2025-11-11 00:17:13.620594 | orchestrator |
2025-11-11 00:17:13.620613 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-11-11 00:17:55.108820 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-11-11 00:17:55.108929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-11-11 00:17:55.108945 | orchestrator | changed: [testbed-manager]
2025-11-11 00:17:55.108984 | orchestrator |
2025-11-11 00:17:55.108996 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-11-11 00:18:04.899467 | orchestrator | changed: [testbed-manager]
2025-11-11 00:18:04.899574 | orchestrator |
2025-11-11 00:18:04.899592 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-11-11 00:18:04.971460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-11-11 00:18:04.971492 | orchestrator |
2025-11-11 00:18:04.971504 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-11-11 00:18:04.971515 | orchestrator |
2025-11-11 00:18:04.971527 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-11-11 00:18:05.023462 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:18:05.023517 | orchestrator |
2025-11-11 00:18:05.023530 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2025-11-11 00:18:05.087947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2025-11-11 00:18:05.088005 | orchestrator |
2025-11-11 00:18:05.088018 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2025-11-11 00:18:05.888761 | orchestrator | changed: [testbed-manager]
2025-11-11 00:18:05.888848 | orchestrator |
2025-11-11 00:18:05.888862 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-11-11 00:18:09.388934 | orchestrator | ok: [testbed-manager]
2025-11-11 00:18:09.389049 | orchestrator |
2025-11-11 00:18:09.389065 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-11-11 00:18:09.464153 | orchestrator | ok: [testbed-manager] => {
2025-11-11 00:18:09.464203 | orchestrator | "version_check_result.stdout_lines": [
2025-11-11 00:18:09.464216 | orchestrator | "=== OSISM Container Version Check ===",
2025-11-11 00:18:09.464227 | orchestrator | "Checking running containers against expected versions...",
2025-11-11 00:18:09.464238 | orchestrator | "",
2025-11-11 00:18:09.464248 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-11-11 00:18:09.464259 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-11-11 00:18:09.464268 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464278 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-11-11 00:18:09.464288 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464298 | orchestrator | "",
2025-11-11 00:18:09.464308 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-11-11 00:18:09.464318 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-11-11 00:18:09.464328 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464337 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-11-11 00:18:09.464347 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464357 | orchestrator | "",
2025-11-11 00:18:09.464367 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-11-11 00:18:09.464422 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-11-11 00:18:09.464433 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464443 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-11-11 00:18:09.464453 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464462 | orchestrator | "",
2025-11-11 00:18:09.464472 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-11-11 00:18:09.464482 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-11-11 00:18:09.464492 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464502 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-11-11 00:18:09.464512 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464522 | orchestrator | "",
2025-11-11 00:18:09.464532 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-11-11 00:18:09.464541 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-11-11 00:18:09.464576 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464586 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-11-11 00:18:09.464595 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464605 | orchestrator | "",
2025-11-11 00:18:09.464615 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-11-11 00:18:09.464624 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.464634 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464644 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.464653 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464663 | orchestrator | "",
2025-11-11 00:18:09.464673 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-11-11 00:18:09.464682 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-11-11 00:18:09.464692 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464701 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-11-11 00:18:09.464711 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464720 | orchestrator | "",
2025-11-11 00:18:09.464730 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-11-11 00:18:09.464745 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-11 00:18:09.464756 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464767 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-11 00:18:09.464778 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464789 | orchestrator | "",
2025-11-11 00:18:09.464800 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-11-11 00:18:09.464811 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-11-11 00:18:09.464822 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464838 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-11-11 00:18:09.464849 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464860 | orchestrator | "",
2025-11-11 00:18:09.464871 | orchestrator | "Checking service: redis (Redis Cache)",
2025-11-11 00:18:09.464882 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-11 00:18:09.464894 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464905 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-11 00:18:09.464915 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464926 | orchestrator | "",
2025-11-11 00:18:09.464936 | orchestrator | "Checking service: api (OSISM API Service)",
2025-11-11 00:18:09.464947 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.464957 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.464968 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.464979 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.464989 | orchestrator | "",
2025-11-11 00:18:09.465000 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-11-11 00:18:09.465011 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465021 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.465032 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465042 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.465053 | orchestrator | "",
2025-11-11 00:18:09.465064 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-11-11 00:18:09.465075 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465085 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.465096 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465105 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.465115 | orchestrator | "",
2025-11-11 00:18:09.465124 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-11-11 00:18:09.465134 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465143 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.465152 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465169 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.465179 | orchestrator | "",
2025-11-11 00:18:09.465188 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-11-11 00:18:09.465214 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465224 | orchestrator | " Enabled: true",
2025-11-11 00:18:09.465234 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-11 00:18:09.465243 | orchestrator | " Status: ✅ MATCH",
2025-11-11 00:18:09.465253 | orchestrator | "",
2025-11-11 00:18:09.465263 | orchestrator | "=== Summary ===",
2025-11-11 00:18:09.465272 | orchestrator | "Errors (version mismatches): 0",
2025-11-11 00:18:09.465282 | orchestrator | "Warnings (expected containers not running): 0",
2025-11-11 00:18:09.465292 | orchestrator | "",
2025-11-11 00:18:09.465301 | orchestrator | "✅ All running containers match expected versions!"
2025-11-11 00:18:09.465311 | orchestrator | ]
2025-11-11 00:18:09.465320 | orchestrator | }
2025-11-11 00:18:09.465330 | orchestrator |
2025-11-11 00:18:09.465340 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2025-11-11 00:18:09.514648 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:18:09.514670 | orchestrator |
2025-11-11 00:18:09.514680 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:18:09.514690 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-11-11 00:18:09.514700 | orchestrator |
2025-11-11 00:18:09.613163 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-11-11 00:18:09.613222 | orchestrator | + deactivate
2025-11-11 00:18:09.613236 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-11-11 00:18:09.613248 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-11 00:18:09.613258 | orchestrator | + export PATH
2025-11-11 00:18:09.613271 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-11-11 00:18:09.613289 | orchestrator | + '[' -n '' ']'
2025-11-11 00:18:09.613304 | orchestrator | + hash -r
2025-11-11 00:18:09.613320 | orchestrator | + '[' -n '' ']'
2025-11-11 00:18:09.613338 | orchestrator | + unset VIRTUAL_ENV
2025-11-11 00:18:09.613354 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-11-11 00:18:09.613369 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-11-11 00:18:09.613420 | orchestrator | + unset -f deactivate
2025-11-11 00:18:09.613431 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-11-11 00:18:09.621586 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-11-11 00:18:09.621671 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-11-11 00:18:09.621684 | orchestrator | + local max_attempts=60
2025-11-11 00:18:09.621696 | orchestrator | + local name=ceph-ansible
2025-11-11 00:18:09.621706 | orchestrator | + local attempt_num=1
2025-11-11 00:18:09.622342 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-11-11 00:18:09.654416 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-11 00:18:09.654478 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-11-11 00:18:09.654491 | orchestrator | + local max_attempts=60
2025-11-11 00:18:09.654502 | orchestrator | + local name=kolla-ansible
2025-11-11 00:18:09.654512 | orchestrator | + local attempt_num=1
2025-11-11 00:18:09.654845 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-11-11 00:18:09.684756 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-11 00:18:09.684779 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-11-11 00:18:09.684789 | orchestrator | + local max_attempts=60
2025-11-11 00:18:09.684799 | orchestrator | + local name=osism-ansible
2025-11-11 00:18:09.684808 | orchestrator | + local attempt_num=1
2025-11-11 00:18:09.685397 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-11-11 00:18:09.714983 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-11 00:18:09.715029 | orchestrator | + [[ true ==
\t\r\u\e ]] 2025-11-11 00:18:09.715040 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-11 00:18:10.405221 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-11-11 00:18:10.608676 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-11-11 00:18:10.608812 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-11-11 00:18:10.608827 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-11-11 00:18:10.608839 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-11-11 00:18:10.608851 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-11-11 00:18:10.608863 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2025-11-11 00:18:10.608894 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2025-11-11 00:18:10.608906 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2025-11-11 00:18:10.608917 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2025-11-11 00:18:10.608928 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2025-11-11 
00:18:10.608938 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2025-11-11 00:18:10.608949 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2025-11-11 00:18:10.608960 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-11-11 00:18:10.608970 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-11-11 00:18:10.608981 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-11-11 00:18:10.608992 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2025-11-11 00:18:10.615781 | orchestrator | ++ semver latest 7.0.0 2025-11-11 00:18:10.659148 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-11 00:18:10.659195 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-11 00:18:10.659212 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-11-11 00:18:10.662475 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-11-11 00:18:22.905438 | orchestrator | 2025-11-11 00:18:22 | INFO  | Task 4b7fffb1-c9ae-41f9-9050-45c554459c01 (resolvconf) was prepared for execution. 2025-11-11 00:18:22.905543 | orchestrator | 2025-11-11 00:18:22 | INFO  | It takes a moment until task 4b7fffb1-c9ae-41f9-9050-45c554459c01 (resolvconf) has been started and output is visible here. 
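The shell trace earlier in this log polls each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) with `docker inspect -f '{{.State.Health.Status}}'` before proceeding. A minimal sketch of that polling pattern, assuming a fixed sleep between attempts (the interval is not visible in the trace, since every container was already healthy on the first check):

```shell
# Sketch of the health-wait loop suggested by the trace above.
# The inspect format string matches the log; the 5s retry sleep
# is an assumption, as the trace never needed a second attempt.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

Because the loop re-runs `docker inspect` each time, it tracks the container's own healthcheck rather than guessing at startup time.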
2025-11-11 00:18:36.405001 | orchestrator | 2025-11-11 00:18:36.405103 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-11-11 00:18:36.405119 | orchestrator | 2025-11-11 00:18:36.405131 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-11 00:18:36.405143 | orchestrator | Tuesday 11 November 2025 00:18:26 +0000 (0:00:00.137) 0:00:00.137 ****** 2025-11-11 00:18:36.405154 | orchestrator | ok: [testbed-manager] 2025-11-11 00:18:36.405166 | orchestrator | 2025-11-11 00:18:36.405177 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-11 00:18:36.405188 | orchestrator | Tuesday 11 November 2025 00:18:30 +0000 (0:00:03.629) 0:00:03.767 ****** 2025-11-11 00:18:36.405199 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:18:36.405211 | orchestrator | 2025-11-11 00:18:36.405222 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-11 00:18:36.405233 | orchestrator | Tuesday 11 November 2025 00:18:30 +0000 (0:00:00.069) 0:00:03.836 ****** 2025-11-11 00:18:36.405252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-11-11 00:18:36.405265 | orchestrator | 2025-11-11 00:18:36.405276 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-11 00:18:36.405287 | orchestrator | Tuesday 11 November 2025 00:18:30 +0000 (0:00:00.083) 0:00:03.920 ****** 2025-11-11 00:18:36.405298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-11-11 00:18:36.405309 | orchestrator | 2025-11-11 00:18:36.405319 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-11-11 00:18:36.405330 | orchestrator | Tuesday 11 November 2025 00:18:30 +0000 (0:00:00.073) 0:00:03.994 ****** 2025-11-11 00:18:36.405341 | orchestrator | ok: [testbed-manager] 2025-11-11 00:18:36.405391 | orchestrator | 2025-11-11 00:18:36.405403 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-11 00:18:36.405414 | orchestrator | Tuesday 11 November 2025 00:18:31 +0000 (0:00:01.038) 0:00:05.032 ****** 2025-11-11 00:18:36.405424 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:18:36.405436 | orchestrator | 2025-11-11 00:18:36.405446 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-11 00:18:36.405457 | orchestrator | Tuesday 11 November 2025 00:18:31 +0000 (0:00:00.070) 0:00:05.102 ****** 2025-11-11 00:18:36.405468 | orchestrator | ok: [testbed-manager] 2025-11-11 00:18:36.405478 | orchestrator | 2025-11-11 00:18:36.405489 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-11 00:18:36.405500 | orchestrator | Tuesday 11 November 2025 00:18:32 +0000 (0:00:00.491) 0:00:05.594 ****** 2025-11-11 00:18:36.405511 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:18:36.405521 | orchestrator | 2025-11-11 00:18:36.405532 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-11 00:18:36.405545 | orchestrator | Tuesday 11 November 2025 00:18:32 +0000 (0:00:00.093) 0:00:05.687 ****** 2025-11-11 00:18:36.405557 | orchestrator | changed: [testbed-manager] 2025-11-11 00:18:36.405569 | orchestrator | 2025-11-11 00:18:36.405581 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-11 00:18:36.405594 | orchestrator | Tuesday 11 November 2025 00:18:32 +0000 (0:00:00.501) 0:00:06.189 ****** 2025-11-11 00:18:36.405605 | orchestrator | changed: 
[testbed-manager] 2025-11-11 00:18:36.405617 | orchestrator | 2025-11-11 00:18:36.405629 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-11 00:18:36.405640 | orchestrator | Tuesday 11 November 2025 00:18:33 +0000 (0:00:01.035) 0:00:07.225 ****** 2025-11-11 00:18:36.405652 | orchestrator | ok: [testbed-manager] 2025-11-11 00:18:36.405664 | orchestrator | 2025-11-11 00:18:36.405675 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-11 00:18:36.405706 | orchestrator | Tuesday 11 November 2025 00:18:34 +0000 (0:00:00.941) 0:00:08.166 ****** 2025-11-11 00:18:36.405718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-11-11 00:18:36.405730 | orchestrator | 2025-11-11 00:18:36.405742 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-11 00:18:36.405753 | orchestrator | Tuesday 11 November 2025 00:18:35 +0000 (0:00:00.065) 0:00:08.232 ****** 2025-11-11 00:18:36.405765 | orchestrator | changed: [testbed-manager] 2025-11-11 00:18:36.405777 | orchestrator | 2025-11-11 00:18:36.405789 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:18:36.405800 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-11 00:18:36.405812 | orchestrator | 2025-11-11 00:18:36.405825 | orchestrator | 2025-11-11 00:18:36.405836 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:18:36.405848 | orchestrator | Tuesday 11 November 2025 00:18:36 +0000 (0:00:01.178) 0:00:09.410 ****** 2025-11-11 00:18:36.405860 | orchestrator | =============================================================================== 2025-11-11 00:18:36.405872 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.63s 2025-11-11 00:18:36.405884 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2025-11-11 00:18:36.405896 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2025-11-11 00:18:36.405908 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-11-11 00:18:36.405919 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-11-11 00:18:36.405930 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-11-11 00:18:36.405957 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2025-11-11 00:18:36.405968 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-11-11 00:18:36.405985 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-11-11 00:18:36.405996 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-11-11 00:18:36.406007 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-11-11 00:18:36.406064 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-11-11 00:18:36.406077 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-11-11 00:18:36.657554 | orchestrator | + osism apply sshconfig 2025-11-11 00:18:48.722597 | orchestrator | 2025-11-11 00:18:48 | INFO  | Task 29c14c6c-7b15-4450-8e15-8f12ab5cfcea (sshconfig) was prepared for execution. 
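The resolvconf play above links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf` after archiving any existing file. A minimal sketch of that step, with the paths taken from the task names; the backup suffix is an assumption, not something the role's output shows:

```shell
# Sketch of the symlink step the resolvconf role performs above:
# point /etc/resolv.conf at systemd-resolved's stub resolver file.
# The ".backup" suffix is illustrative, not from the role.
link_stub_resolv() {
    local target=${1:-/run/systemd/resolve/stub-resolv.conf}
    local link=${2:-/etc/resolv.conf}
    # keep a copy of any existing regular file before replacing it
    if [[ -f $link && ! -L $link ]]; then
        cp "$link" "$link.backup"
    fi
    ln -sf "$target" "$link"
}
```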
2025-11-11 00:18:48.722718 | orchestrator | 2025-11-11 00:18:48 | INFO  | It takes a moment until task 29c14c6c-7b15-4450-8e15-8f12ab5cfcea (sshconfig) has been started and output is visible here. 2025-11-11 00:19:00.149906 | orchestrator | 2025-11-11 00:19:00.150011 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-11-11 00:19:00.150082 | orchestrator | 2025-11-11 00:19:00.150095 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-11-11 00:19:00.150106 | orchestrator | Tuesday 11 November 2025 00:18:52 +0000 (0:00:00.153) 0:00:00.153 ****** 2025-11-11 00:19:00.150117 | orchestrator | ok: [testbed-manager] 2025-11-11 00:19:00.150129 | orchestrator | 2025-11-11 00:19:00.150140 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-11-11 00:19:00.150151 | orchestrator | Tuesday 11 November 2025 00:18:53 +0000 (0:00:00.610) 0:00:00.764 ****** 2025-11-11 00:19:00.150162 | orchestrator | changed: [testbed-manager] 2025-11-11 00:19:00.150174 | orchestrator | 2025-11-11 00:19:00.150185 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-11-11 00:19:00.150216 | orchestrator | Tuesday 11 November 2025 00:18:53 +0000 (0:00:00.510) 0:00:01.274 ****** 2025-11-11 00:19:00.150228 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-11-11 00:19:00.150239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-11-11 00:19:00.150249 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-11-11 00:19:00.150260 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-11-11 00:19:00.150271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-11-11 00:19:00.150281 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-11-11 00:19:00.150292 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-2) 2025-11-11 00:19:00.150302 | orchestrator | 2025-11-11 00:19:00.150313 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-11-11 00:19:00.150323 | orchestrator | Tuesday 11 November 2025 00:18:59 +0000 (0:00:05.487) 0:00:06.762 ****** 2025-11-11 00:19:00.150373 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:19:00.150384 | orchestrator | 2025-11-11 00:19:00.150395 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-11-11 00:19:00.150405 | orchestrator | Tuesday 11 November 2025 00:18:59 +0000 (0:00:00.069) 0:00:06.831 ****** 2025-11-11 00:19:00.150416 | orchestrator | changed: [testbed-manager] 2025-11-11 00:19:00.150427 | orchestrator | 2025-11-11 00:19:00.150437 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:19:00.150449 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:19:00.150462 | orchestrator | 2025-11-11 00:19:00.150475 | orchestrator | 2025-11-11 00:19:00.150488 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:19:00.150501 | orchestrator | Tuesday 11 November 2025 00:18:59 +0000 (0:00:00.561) 0:00:07.393 ****** 2025-11-11 00:19:00.150513 | orchestrator | =============================================================================== 2025-11-11 00:19:00.150526 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.49s 2025-11-11 00:19:00.150538 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s 2025-11-11 00:19:00.150550 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-11-11 00:19:00.150563 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.51s 2025-11-11 00:19:00.150575 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-11-11 00:19:00.430105 | orchestrator | + osism apply known-hosts 2025-11-11 00:19:12.451971 | orchestrator | 2025-11-11 00:19:12 | INFO  | Task ac648c7a-f569-4d31-95df-a5fdf1b5ab77 (known-hosts) was prepared for execution. 2025-11-11 00:19:12.452069 | orchestrator | 2025-11-11 00:19:12 | INFO  | It takes a moment until task ac648c7a-f569-4d31-95df-a5fdf1b5ab77 (known-hosts) has been started and output is visible here. 2025-11-11 00:19:28.571071 | orchestrator | 2025-11-11 00:19:28.571205 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-11-11 00:19:28.571223 | orchestrator | 2025-11-11 00:19:28.571236 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-11-11 00:19:28.571248 | orchestrator | Tuesday 11 November 2025 00:19:16 +0000 (0:00:00.118) 0:00:00.118 ****** 2025-11-11 00:19:28.571260 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-11 00:19:28.571271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-11 00:19:28.571282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-11 00:19:28.571293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-11 00:19:28.571343 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-11 00:19:28.571356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-11 00:19:28.571367 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-11 00:19:28.571400 | orchestrator | 2025-11-11 00:19:28.571412 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-11-11 00:19:28.571424 | orchestrator | Tuesday 11 November 2025 00:19:22 +0000 (0:00:05.680) 0:00:05.798 ****** 2025-11-11 
00:19:28.571436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-11 00:19:28.571449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-11 00:19:28.571460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-11 00:19:28.571471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-11 00:19:28.571481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-11 00:19:28.571492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-11 00:19:28.571503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-11 00:19:28.571514 | orchestrator | 2025-11-11 00:19:28.571525 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571535 | orchestrator | Tuesday 11 November 2025 00:19:22 +0000 (0:00:00.168) 0:00:05.966 ****** 2025-11-11 00:19:28.571549 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDUYb2fC7zcutL1yPW8y3sfatPfbFeRvPV4dPssRt+BDXychAnXbNdQwOq4ndDU480sqJsGk0NP7591d7qWw6AFFvaXllZgSyqRz4blnbfXO5zVD28eruivm7h962WkwgOzpRZ92lg0Gu0xXWVzRWb7O3ZeiB54QpJo/jP5a+hRhR7OYWqiXQLhF2IQgFGgOPO7bK72dX6knQpw8d7sv+Ixm0KCJvTlzqtpNqgQY5MuZbN6hZR3/Pwyv5LwlHoenzmPGIrc9erdokh/UBdevyKRksY8ClDrWaqhYsNvmkSfwui+AAtObLocg08HY22WHTFepwcynvf+xS73Z5/lEFhiJXBBAekfxYYFOTVCtTi8U2aHYsTbIDXX6Ty2vqeKv05sLNK1dZdTFogGgQoP9gum4mDMBpk7e9YoD4akD1ZWbp8TxumPXMFzsaJk/gTMedmVRcGAcp8Skcem+Zyd0hCVdrpRwGuOJMeTvcF0h5nFc6SxVMPZIn+1AHXdaAgL8ts=) 2025-11-11 00:19:28.571564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjonb23Y6BXGVn5DGH2GSHpKbBPM0QrXRS4THifgoFMAeq8OILDOWSN9DtBgF8BTM6CB/Vpz0iH4BodAJzoBSc=) 2025-11-11 00:19:28.571577 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID8uNob6qAwZrCx/VQegBZNqMtRuxeXOc+yMEUN5zq7k) 2025-11-11 00:19:28.571590 | orchestrator | 2025-11-11 00:19:28.571601 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571612 | orchestrator | Tuesday 11 November 2025 00:19:23 +0000 (0:00:01.159) 0:00:07.125 ****** 2025-11-11 00:19:28.571642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuWcSI6KI84QArEZx8WiL12LxHa3c5N3wqZNxgp1lRdOF+Rcur0B/6bsAEIa4xxOSzfvoKv2osjb0K/xQT30XYuATEH4/t/5Wj7BmyV/VDSMf4bEYF4RCSzaE3YRAP8cwLa47bOi4WrApFX8jglMKa15V8DEErrX02Z7wV8fdYc294eFBBXYzReskXsINMb4AZ6pOmA2RfjbDxWo+I0oKYf9tQwSGppDBlLwNsS6DB2rzAMQWvyxdHxulrWbHOASf1M/btgRm1y5+Gs8BLALPoJ31tEmGkD2bogOdL81/A1J+oEuC3EFoKaz+gS5FsOjj8a7S4FMG42Li0V6zlreqhtbhP1VrOXNSwdIuzoUyPWEt1DgPt2h18EESORMf/EZorwUiiMApcX/QlYnGUJlZ75CB1BPDNWUSI4NFNl6vjfBNFtM0zhAr4iHJyYBlN3ZnEU9HLUn2twZnnDy/Dll39KPExXPD6IZaqKsukZ//vDvhLwhVyhoZE8U5SIGBu/oc=) 2025-11-11 00:19:28.571655 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMUwrP+1BzwxMqSXZeWW0xiKwNy7utEFs/7oZU7U4CG5BTvHxKzityWsIH7MDOvi8AadmLsfgWHKnIXsjSEW+A=) 2025-11-11 00:19:28.571673 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEpXQSywtLugIkzF5FFgdyY2C9jrKi+fDZxQYdlUwnv) 2025-11-11 00:19:28.571684 | orchestrator | 2025-11-11 00:19:28.571695 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571706 | orchestrator | Tuesday 11 November 2025 00:19:24 +0000 (0:00:01.042) 0:00:08.168 ****** 2025-11-11 00:19:28.571777 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpY52w4LXoCs/P8Dh7/jJBkzEjizOsWsJMTepy/AO72VVa2G1FP0iIjCmfD6McL+/NyL97xTp4Guvyx7awMaix+xcURzO2ceSXVgL64RxHOSOa1lG5z21rs046piOiTmcxHLDwrzZkbxfrUqIxkvs8+uer+uKEzLVQPTr1OWQE6L98ZOOaGMOdoRG29dddKhO6ZuTfjAsnZT9/pmA5iLs8RR3gBaXbmDHPido1tg0ZNkUatJUOb0aruDHRkxZjt/9ICn0pZma2E3k5dRd3LsHA+qlJn9H4HNzv2fJ6rgovQXS9ACmTyMfpa0WEdFa5Ud+ro2AWI+ET+7LIbZgiYHV/vbyF8MBVvkn+saQ4fqw3rXmvZBMfoidC+fgmq7d+xw6YGS1xX8EtGgpDcg5UGzYpmhtPkaaJWI1nAqlgVhQ+YyyG0s6mDHnSFCFoE/RQlb5gF4ITxr2K5MqPrk1PWZSdkVxy0GR/6q+COLiunmFDj7CXgMkbakvhTv/EBGmdJY0=) 2025-11-11 00:19:28.571790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN++E2HMTrFDQRjtLEHL9KqkpttJMc6LdqV3/vZQregRk+bDU7d/mt8iS0bN0QGZE5WkVKTtDKlm+xI2dS1tkio=) 2025-11-11 00:19:28.571801 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKO/p4f+gbbLdBQSkTTCpDyt3rMT1Q//1wMJjNS3+FT5) 2025-11-11 00:19:28.571812 | orchestrator | 2025-11-11 00:19:28.571823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571834 | orchestrator | Tuesday 11 November 2025 00:19:25 +0000 (0:00:01.040) 
0:00:09.208 ****** 2025-11-11 00:19:28.571845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINBrOT1Ucgwtr/EIX0uTygWU5ybkTE97zsKkUODwq6Us) 2025-11-11 00:19:28.571856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjjR/yA+OwmhY76x36W/E2KwnDQuui2vz1OjLt/gbJ/W/fxG5KtnHRohz85fA8M0p2XU2vdsaGHc+4eXytr/bJY9bRxwpjZtYw6bd6IQwlTX3In/CaiOtHm10wILdveRmlPI9zvwn+Zw88Uuz3GMrE+hjyHRAvc8nk+bmfBozvoaQelk05suoUyXBQ28M3Y2gPI2bq7hUdW9kVmyolSOhhaWYRn081hGVUr+Hd0JV3DKa9Sf3yyT0qzfLehjZu6n3r3IwhnrCzEC36xlQO/eqyQKx6fjYS8B6Ofcp3MgtKquBX8zZ+0BobdWBm8zeZeGoXHIHK2Y8Z4XuWX2F6y+gqqVPGE8bowjaV80/Ttfig9bxsh4savvoF783egJ7NDfWBpWODLL31UjRYL3nTlSsvrr64EvRcfs3NOmehlu7CZOY6WLlkIZccWQ8eDbYNmtQLX+mjof9G/U6jd9n8AJIhS8xBpUN4ybOvYxLYerc3HL2BXhTa/nQFgc5REkYJtgs=) 2025-11-11 00:19:28.571868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH3zGwzHRpvTomfBt9IGpeyC5yJ5ue7/08nzMp2hn69NQT21lpsUozcf5KoC5StQPXJqc+q2EUy9YqYcQ7u8Quo=) 2025-11-11 00:19:28.571879 | orchestrator | 2025-11-11 00:19:28.571890 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571901 | orchestrator | Tuesday 11 November 2025 00:19:26 +0000 (0:00:01.020) 0:00:10.228 ****** 2025-11-11 00:19:28.571912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFOXHZzzPrcQNNKcUOTyDeCqSzoOcnBgiI8dpPgTEsSaEiOl3N+pLw0/mLI9+6k2xtABZ1ZcU5bzn85tKA6NmqY8kiz3ATpc5qn6i0S+tH5nofAufm/JDZO882VmrkoPgl7DOhQrX3PW8h2t9gbUzitQH4KMTKDDIOQpKIdRYhq0GecPWavN7u9EFBxh7B0ZKpAHwIACx+Rz/hyrDUtGmMnMfv5qQFYi3NzKPKwgixSuW15uqimOihqpL2cLEK82xAoPakhDjM5E04FVZ3DYKGGg9vcTUCXh09wcim1cuZSlaYCORplOjgp0n6pZN//dMShjM9btueZZ6fqVAKiUTMgrzwbL7vBdM4YSzZPfr9e4idMRXODxf31qpbiLvLj3vv4/R1wclyNTduDcgkBRjqlQPJH5AKHGDT4DOR1LoOcGtOyHKV1FJVA7QoUkuaQMJxTEgwOEJARC6NR+N2JeVHnT+CzkEy3RCy73zq5znqS75t5wmmVKjmmBrLg9OHeeU=) 2025-11-11 00:19:28.571924 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtqJax7deJVCkzWV3BlvyiEJ4fS6vujgXKk9wVbmptO3tAPdpwfhdoPVaE18xHPFHKvit51Kx7MeTS9iPbPb8U=) 2025-11-11 00:19:28.571941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDpFXjTROqWKLxxipOj55bYgApsLdXc0K7huyyv59AM) 2025-11-11 00:19:28.571952 | orchestrator | 2025-11-11 00:19:28.571963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:28.571974 | orchestrator | Tuesday 11 November 2025 00:19:27 +0000 (0:00:01.023) 0:00:11.252 ****** 2025-11-11 00:19:28.571994 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQSpgAXmvUvtEFUVPSM4GjY7IEGWf+ON08D9TXJAnVDNagsvEXLo7gdengnaExWulLeh9F37Mxrm76qBPBFE9IJojLbgWy/FyH67oo6ojBVVTVr5idweYheYTKkaCm/QAR+XHB5JLDDOPbPH7uVnKYc6/n95hhKwBIUqKFMHS2s3ZGNeafTrFOaj4FOBlYRRsHkWBT4QDSRT07eOoXVH/AHjL6EGZN4ObEspQ/nM1V6L2kXJXiUo2Vpx+SPkM5qSapmGQDBuQPQn0HjnetgwRuHsOoUMEOV5yXfHoLtLZ4v4oftZkGFGl9MX4vYPgBhNFaMpxCKmRXUxFdXiYUJYPJLLgmDHr+jpT3mwjQTCdRaQxzUDeRwx7UDMFeOoILJGl7Kaw4zt/pcQT8vZ6WtNe8mWCHRl3llWrcIPwWIqbtbA3TB9tPAMFk6cMmpw+uhxQ6g/J1OSnTskMmiMk1dn+jntI0vtI//V37wBUy6Z9dhrUncBEY/MiJvxYvzellNVk=) 2025-11-11 00:19:38.933508 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDc7aoo6hnYOZOVLPtaM+mXxwpzV1g5A2YEsx8ut45vSnASENA2P15xe2U0PjlEODwuEyGXKEcflqw1wolO59g=) 2025-11-11 00:19:38.933662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMaOlZlSE48fQ2B6904PvI2k0t/IF/A6YzxTgyFlPDB2) 2025-11-11 00:19:38.933740 | orchestrator | 2025-11-11 00:19:38.933758 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:38.933771 | orchestrator | Tuesday 11 November 2025 00:19:28 +0000 (0:00:00.998) 0:00:12.250 ****** 2025-11-11 00:19:38.933784 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEaEalgybBsRwrHi+3x3Hg4bbs3oxq9+qAHfEH/Wcbq6) 2025-11-11 00:19:38.933798 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6yb5kqblEBlJt7vVGRgosuBsOWf76jqkz/HFsyq1ENTzi2GH0XHrpIHoV/s0nRJBHTWaQp8/AgDs9ywUC2JBGIYE7wxp6qyF5k20+BKo8lpjmKZOf286nO94fHJpQieOsHtN30IhJL5ZukYKhrgHtnnKmgfPUhXYz8kV/9n6lNwzdMt3J43thceshUpdfdYxSAhryH60CJFOhO402QsHx3iiz+ewoK/KgF9Fkf0hXBxPRxrPf9gz9z5AqRSyBFuP0d1vi/keSPU1/PXW/5TodqJ5tVR0uCsbwE/QN/4cvoDs6JVcyYidQ8dgzPYZD8SSOheveZTPHF9e+q88ZIKxRn8vvpgfneOI8UGBvUn22K1XcsNPqraV7XlqQ1RY1bmpkDiNcAWYtDGHb1py6Qb/mauTVJH6u1b4lUQ9/oT/YMIrrIsx9GZNML/uvE0+8eLtbE5F2kEkR8ksuJnlU2Kt7CpnjUVZCiJ207qDpacfyG0nEAbCNWGlZD8gDa661GkU=) 2025-11-11 00:19:38.933812 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWF8dDA+Cfq02Y5hK6TPkgdie4YsS7U3UhVZq1JYi4qvGU5251XS3DC1/7/WRdh1WLyHDLJY1q/Zde0cP5+8fw=) 2025-11-11 00:19:38.933824 | orchestrator | 2025-11-11 00:19:38.933857 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-11-11 00:19:38.933869 | orchestrator | Tuesday 11 November 2025 00:19:29 +0000 
(0:00:01.036) 0:00:13.287 ****** 2025-11-11 00:19:38.933885 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-11 00:19:38.933897 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-11 00:19:38.933908 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-11 00:19:38.933919 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-11 00:19:38.933930 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-11 00:19:38.933941 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-11 00:19:38.933951 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-11 00:19:38.933962 | orchestrator | 2025-11-11 00:19:38.933973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-11-11 00:19:38.933985 | orchestrator | Tuesday 11 November 2025 00:19:34 +0000 (0:00:05.058) 0:00:18.345 ****** 2025-11-11 00:19:38.933997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-11 00:19:38.934084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-11 00:19:38.934099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-11 00:19:38.934112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-11 00:19:38.934124 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-11 00:19:38.934135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-11 00:19:38.934148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-11 00:19:38.934168 | orchestrator | 2025-11-11 00:19:38.934180 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:38.934193 | orchestrator | Tuesday 11 November 2025 00:19:34 +0000 (0:00:00.166) 0:00:18.511 ****** 2025-11-11 00:19:38.934206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjonb23Y6BXGVn5DGH2GSHpKbBPM0QrXRS4THifgoFMAeq8OILDOWSN9DtBgF8BTM6CB/Vpz0iH4BodAJzoBSc=) 2025-11-11 00:19:38.934239 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID8uNob6qAwZrCx/VQegBZNqMtRuxeXOc+yMEUN5zq7k) 2025-11-11 00:19:38.934254 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUYb2fC7zcutL1yPW8y3sfatPfbFeRvPV4dPssRt+BDXychAnXbNdQwOq4ndDU480sqJsGk0NP7591d7qWw6AFFvaXllZgSyqRz4blnbfXO5zVD28eruivm7h962WkwgOzpRZ92lg0Gu0xXWVzRWb7O3ZeiB54QpJo/jP5a+hRhR7OYWqiXQLhF2IQgFGgOPO7bK72dX6knQpw8d7sv+Ixm0KCJvTlzqtpNqgQY5MuZbN6hZR3/Pwyv5LwlHoenzmPGIrc9erdokh/UBdevyKRksY8ClDrWaqhYsNvmkSfwui+AAtObLocg08HY22WHTFepwcynvf+xS73Z5/lEFhiJXBBAekfxYYFOTVCtTi8U2aHYsTbIDXX6Ty2vqeKv05sLNK1dZdTFogGgQoP9gum4mDMBpk7e9YoD4akD1ZWbp8TxumPXMFzsaJk/gTMedmVRcGAcp8Skcem+Zyd0hCVdrpRwGuOJMeTvcF0h5nFc6SxVMPZIn+1AHXdaAgL8ts=) 2025-11-11 
00:19:38.934267 | orchestrator | 2025-11-11 00:19:38.934279 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:38.934291 | orchestrator | Tuesday 11 November 2025 00:19:35 +0000 (0:00:01.044) 0:00:19.556 ****** 2025-11-11 00:19:38.934327 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuWcSI6KI84QArEZx8WiL12LxHa3c5N3wqZNxgp1lRdOF+Rcur0B/6bsAEIa4xxOSzfvoKv2osjb0K/xQT30XYuATEH4/t/5Wj7BmyV/VDSMf4bEYF4RCSzaE3YRAP8cwLa47bOi4WrApFX8jglMKa15V8DEErrX02Z7wV8fdYc294eFBBXYzReskXsINMb4AZ6pOmA2RfjbDxWo+I0oKYf9tQwSGppDBlLwNsS6DB2rzAMQWvyxdHxulrWbHOASf1M/btgRm1y5+Gs8BLALPoJ31tEmGkD2bogOdL81/A1J+oEuC3EFoKaz+gS5FsOjj8a7S4FMG42Li0V6zlreqhtbhP1VrOXNSwdIuzoUyPWEt1DgPt2h18EESORMf/EZorwUiiMApcX/QlYnGUJlZ75CB1BPDNWUSI4NFNl6vjfBNFtM0zhAr4iHJyYBlN3ZnEU9HLUn2twZnnDy/Dll39KPExXPD6IZaqKsukZ//vDvhLwhVyhoZE8U5SIGBu/oc=) 2025-11-11 00:19:38.934340 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMUwrP+1BzwxMqSXZeWW0xiKwNy7utEFs/7oZU7U4CG5BTvHxKzityWsIH7MDOvi8AadmLsfgWHKnIXsjSEW+A=) 2025-11-11 00:19:38.934353 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEpXQSywtLugIkzF5FFgdyY2C9jrKi+fDZxQYdlUwnv) 2025-11-11 00:19:38.934375 | orchestrator | 2025-11-11 00:19:38.934387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:38.934398 | orchestrator | Tuesday 11 November 2025 00:19:36 +0000 (0:00:01.026) 0:00:20.582 ****** 2025-11-11 00:19:38.934410 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpY52w4LXoCs/P8Dh7/jJBkzEjizOsWsJMTepy/AO72VVa2G1FP0iIjCmfD6McL+/NyL97xTp4Guvyx7awMaix+xcURzO2ceSXVgL64RxHOSOa1lG5z21rs046piOiTmcxHLDwrzZkbxfrUqIxkvs8+uer+uKEzLVQPTr1OWQE6L98ZOOaGMOdoRG29dddKhO6ZuTfjAsnZT9/pmA5iLs8RR3gBaXbmDHPido1tg0ZNkUatJUOb0aruDHRkxZjt/9ICn0pZma2E3k5dRd3LsHA+qlJn9H4HNzv2fJ6rgovQXS9ACmTyMfpa0WEdFa5Ud+ro2AWI+ET+7LIbZgiYHV/vbyF8MBVvkn+saQ4fqw3rXmvZBMfoidC+fgmq7d+xw6YGS1xX8EtGgpDcg5UGzYpmhtPkaaJWI1nAqlgVhQ+YyyG0s6mDHnSFCFoE/RQlb5gF4ITxr2K5MqPrk1PWZSdkVxy0GR/6q+COLiunmFDj7CXgMkbakvhTv/EBGmdJY0=) 2025-11-11 00:19:38.934422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN++E2HMTrFDQRjtLEHL9KqkpttJMc6LdqV3/vZQregRk+bDU7d/mt8iS0bN0QGZE5WkVKTtDKlm+xI2dS1tkio=) 2025-11-11 00:19:38.934433 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKO/p4f+gbbLdBQSkTTCpDyt3rMT1Q//1wMJjNS3+FT5) 2025-11-11 00:19:38.934444 | orchestrator | 2025-11-11 00:19:38.934455 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:38.934466 | orchestrator | Tuesday 11 November 2025 00:19:37 +0000 (0:00:00.992) 0:00:21.575 ****** 2025-11-11 00:19:38.934483 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjjR/yA+OwmhY76x36W/E2KwnDQuui2vz1OjLt/gbJ/W/fxG5KtnHRohz85fA8M0p2XU2vdsaGHc+4eXytr/bJY9bRxwpjZtYw6bd6IQwlTX3In/CaiOtHm10wILdveRmlPI9zvwn+Zw88Uuz3GMrE+hjyHRAvc8nk+bmfBozvoaQelk05suoUyXBQ28M3Y2gPI2bq7hUdW9kVmyolSOhhaWYRn081hGVUr+Hd0JV3DKa9Sf3yyT0qzfLehjZu6n3r3IwhnrCzEC36xlQO/eqyQKx6fjYS8B6Ofcp3MgtKquBX8zZ+0BobdWBm8zeZeGoXHIHK2Y8Z4XuWX2F6y+gqqVPGE8bowjaV80/Ttfig9bxsh4savvoF783egJ7NDfWBpWODLL31UjRYL3nTlSsvrr64EvRcfs3NOmehlu7CZOY6WLlkIZccWQ8eDbYNmtQLX+mjof9G/U6jd9n8AJIhS8xBpUN4ybOvYxLYerc3HL2BXhTa/nQFgc5REkYJtgs=) 2025-11-11 00:19:38.934496 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH3zGwzHRpvTomfBt9IGpeyC5yJ5ue7/08nzMp2hn69NQT21lpsUozcf5KoC5StQPXJqc+q2EUy9YqYcQ7u8Quo=) 2025-11-11 00:19:38.934521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINBrOT1Ucgwtr/EIX0uTygWU5ybkTE97zsKkUODwq6Us) 2025-11-11 00:19:43.178894 | orchestrator | 2025-11-11 00:19:43.179007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:43.179022 | orchestrator | Tuesday 11 November 2025 00:19:38 +0000 (0:00:01.037) 0:00:22.613 ****** 2025-11-11 00:19:43.179035 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFOXHZzzPrcQNNKcUOTyDeCqSzoOcnBgiI8dpPgTEsSaEiOl3N+pLw0/mLI9+6k2xtABZ1ZcU5bzn85tKA6NmqY8kiz3ATpc5qn6i0S+tH5nofAufm/JDZO882VmrkoPgl7DOhQrX3PW8h2t9gbUzitQH4KMTKDDIOQpKIdRYhq0GecPWavN7u9EFBxh7B0ZKpAHwIACx+Rz/hyrDUtGmMnMfv5qQFYi3NzKPKwgixSuW15uqimOihqpL2cLEK82xAoPakhDjM5E04FVZ3DYKGGg9vcTUCXh09wcim1cuZSlaYCORplOjgp0n6pZN//dMShjM9btueZZ6fqVAKiUTMgrzwbL7vBdM4YSzZPfr9e4idMRXODxf31qpbiLvLj3vv4/R1wclyNTduDcgkBRjqlQPJH5AKHGDT4DOR1LoOcGtOyHKV1FJVA7QoUkuaQMJxTEgwOEJARC6NR+N2JeVHnT+CzkEy3RCy73zq5znqS75t5wmmVKjmmBrLg9OHeeU=) 2025-11-11 00:19:43.179051 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtqJax7deJVCkzWV3BlvyiEJ4fS6vujgXKk9wVbmptO3tAPdpwfhdoPVaE18xHPFHKvit51Kx7MeTS9iPbPb8U=) 2025-11-11 00:19:43.179063 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDpFXjTROqWKLxxipOj55bYgApsLdXc0K7huyyv59AM) 2025-11-11 00:19:43.179075 | orchestrator | 2025-11-11 00:19:43.179086 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:43.179124 | orchestrator | Tuesday 11 November 2025 00:19:39 +0000 (0:00:01.028) 
0:00:23.642 ****** 2025-11-11 00:19:43.179152 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQSpgAXmvUvtEFUVPSM4GjY7IEGWf+ON08D9TXJAnVDNagsvEXLo7gdengnaExWulLeh9F37Mxrm76qBPBFE9IJojLbgWy/FyH67oo6ojBVVTVr5idweYheYTKkaCm/QAR+XHB5JLDDOPbPH7uVnKYc6/n95hhKwBIUqKFMHS2s3ZGNeafTrFOaj4FOBlYRRsHkWBT4QDSRT07eOoXVH/AHjL6EGZN4ObEspQ/nM1V6L2kXJXiUo2Vpx+SPkM5qSapmGQDBuQPQn0HjnetgwRuHsOoUMEOV5yXfHoLtLZ4v4oftZkGFGl9MX4vYPgBhNFaMpxCKmRXUxFdXiYUJYPJLLgmDHr+jpT3mwjQTCdRaQxzUDeRwx7UDMFeOoILJGl7Kaw4zt/pcQT8vZ6WtNe8mWCHRl3llWrcIPwWIqbtbA3TB9tPAMFk6cMmpw+uhxQ6g/J1OSnTskMmiMk1dn+jntI0vtI//V37wBUy6Z9dhrUncBEY/MiJvxYvzellNVk=) 2025-11-11 00:19:43.179164 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDc7aoo6hnYOZOVLPtaM+mXxwpzV1g5A2YEsx8ut45vSnASENA2P15xe2U0PjlEODwuEyGXKEcflqw1wolO59g=) 2025-11-11 00:19:43.179175 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMaOlZlSE48fQ2B6904PvI2k0t/IF/A6YzxTgyFlPDB2) 2025-11-11 00:19:43.179186 | orchestrator | 2025-11-11 00:19:43.179196 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-11 00:19:43.179207 | orchestrator | Tuesday 11 November 2025 00:19:40 +0000 (0:00:01.024) 0:00:24.666 ****** 2025-11-11 00:19:43.179218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWF8dDA+Cfq02Y5hK6TPkgdie4YsS7U3UhVZq1JYi4qvGU5251XS3DC1/7/WRdh1WLyHDLJY1q/Zde0cP5+8fw=) 2025-11-11 00:19:43.179230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6yb5kqblEBlJt7vVGRgosuBsOWf76jqkz/HFsyq1ENTzi2GH0XHrpIHoV/s0nRJBHTWaQp8/AgDs9ywUC2JBGIYE7wxp6qyF5k20+BKo8lpjmKZOf286nO94fHJpQieOsHtN30IhJL5ZukYKhrgHtnnKmgfPUhXYz8kV/9n6lNwzdMt3J43thceshUpdfdYxSAhryH60CJFOhO402QsHx3iiz+ewoK/KgF9Fkf0hXBxPRxrPf9gz9z5AqRSyBFuP0d1vi/keSPU1/PXW/5TodqJ5tVR0uCsbwE/QN/4cvoDs6JVcyYidQ8dgzPYZD8SSOheveZTPHF9e+q88ZIKxRn8vvpgfneOI8UGBvUn22K1XcsNPqraV7XlqQ1RY1bmpkDiNcAWYtDGHb1py6Qb/mauTVJH6u1b4lUQ9/oT/YMIrrIsx9GZNML/uvE0+8eLtbE5F2kEkR8ksuJnlU2Kt7CpnjUVZCiJ207qDpacfyG0nEAbCNWGlZD8gDa661GkU=) 2025-11-11 00:19:43.179241 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEaEalgybBsRwrHi+3x3Hg4bbs3oxq9+qAHfEH/Wcbq6) 2025-11-11 00:19:43.179252 | orchestrator | 2025-11-11 00:19:43.179263 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-11-11 00:19:43.179273 | orchestrator | Tuesday 11 November 2025 00:19:42 +0000 (0:00:01.031) 0:00:25.697 ****** 2025-11-11 00:19:43.179285 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-11 00:19:43.179320 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-11 00:19:43.179331 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-11 00:19:43.179342 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-11 00:19:43.179352 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-11 00:19:43.179363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-11 00:19:43.179374 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-11 00:19:43.179385 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:19:43.179396 | orchestrator | 2025-11-11 00:19:43.179423 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-11-11 00:19:43.179436 | orchestrator | Tuesday 11 November 
2025 00:19:42 +0000 (0:00:00.175) 0:00:25.873 ****** 2025-11-11 00:19:43.179447 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:19:43.179460 | orchestrator | 2025-11-11 00:19:43.179471 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-11-11 00:19:43.179483 | orchestrator | Tuesday 11 November 2025 00:19:42 +0000 (0:00:00.066) 0:00:25.940 ****** 2025-11-11 00:19:43.179495 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:19:43.179514 | orchestrator | 2025-11-11 00:19:43.179526 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-11-11 00:19:43.179538 | orchestrator | Tuesday 11 November 2025 00:19:42 +0000 (0:00:00.066) 0:00:26.006 ****** 2025-11-11 00:19:43.179550 | orchestrator | changed: [testbed-manager] 2025-11-11 00:19:43.179562 | orchestrator | 2025-11-11 00:19:43.179574 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:19:43.179587 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-11 00:19:43.179600 | orchestrator | 2025-11-11 00:19:43.179613 | orchestrator | 2025-11-11 00:19:43.179625 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:19:43.179636 | orchestrator | Tuesday 11 November 2025 00:19:43 +0000 (0:00:00.689) 0:00:26.696 ****** 2025-11-11 00:19:43.179649 | orchestrator | =============================================================================== 2025-11-11 00:19:43.179661 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.68s 2025-11-11 00:19:43.179673 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.06s 2025-11-11 00:19:43.179686 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-11-11 
00:19:43.179698 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-11 00:19:43.179710 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-11 00:19:43.179722 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-11 00:19:43.179734 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-11 00:19:43.179746 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-11 00:19:43.179758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-11-11 00:19:43.179771 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-11-11 00:19:43.179790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-11-11 00:19:43.179801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-11-11 00:19:43.179812 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-11-11 00:19:43.179823 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-11-11 00:19:43.179833 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-11-11 00:19:43.179844 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-11-11 00:19:43.179854 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2025-11-11 00:19:43.179865 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-11-11 00:19:43.179876 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 
2025-11-11 00:19:43.179887 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-11-11 00:19:43.496083 | orchestrator | + osism apply squid 2025-11-11 00:19:55.468673 | orchestrator | 2025-11-11 00:19:55 | INFO  | Task e75e58bf-ef8e-447f-8a29-cfb2ebd49fde (squid) was prepared for execution. 2025-11-11 00:19:55.468803 | orchestrator | 2025-11-11 00:19:55 | INFO  | It takes a moment until task e75e58bf-ef8e-447f-8a29-cfb2ebd49fde (squid) has been started and output is visible here. 2025-11-11 00:21:48.348300 | orchestrator | 2025-11-11 00:21:48.348430 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-11-11 00:21:48.348446 | orchestrator | 2025-11-11 00:21:48.348459 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-11-11 00:21:48.348470 | orchestrator | Tuesday 11 November 2025 00:19:59 +0000 (0:00:00.157) 0:00:00.157 ****** 2025-11-11 00:21:48.348505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-11-11 00:21:48.348519 | orchestrator | 2025-11-11 00:21:48.348530 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-11-11 00:21:48.348542 | orchestrator | Tuesday 11 November 2025 00:19:59 +0000 (0:00:00.069) 0:00:00.227 ****** 2025-11-11 00:21:48.348553 | orchestrator | ok: [testbed-manager] 2025-11-11 00:21:48.348566 | orchestrator | 2025-11-11 00:21:48.348577 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-11-11 00:21:48.348588 | orchestrator | Tuesday 11 November 2025 00:20:00 +0000 (0:00:01.132) 0:00:01.360 ****** 2025-11-11 00:21:48.348601 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-11-11 00:21:48.348612 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-11-11 00:21:48.348624 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-11-11 00:21:48.348635 | orchestrator | 2025-11-11 00:21:48.348646 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-11-11 00:21:48.348657 | orchestrator | Tuesday 11 November 2025 00:20:01 +0000 (0:00:01.006) 0:00:02.366 ****** 2025-11-11 00:21:48.348668 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-11-11 00:21:48.348680 | orchestrator | 2025-11-11 00:21:48.348691 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-11-11 00:21:48.348702 | orchestrator | Tuesday 11 November 2025 00:20:02 +0000 (0:00:00.941) 0:00:03.308 ****** 2025-11-11 00:21:48.348712 | orchestrator | ok: [testbed-manager] 2025-11-11 00:21:48.348723 | orchestrator | 2025-11-11 00:21:48.348734 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-11-11 00:21:48.348745 | orchestrator | Tuesday 11 November 2025 00:20:02 +0000 (0:00:00.311) 0:00:03.619 ****** 2025-11-11 00:21:48.348756 | orchestrator | changed: [testbed-manager] 2025-11-11 00:21:48.348767 | orchestrator | 2025-11-11 00:21:48.348778 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-11-11 00:21:48.348790 | orchestrator | Tuesday 11 November 2025 00:20:03 +0000 (0:00:00.832) 0:00:04.452 ****** 2025-11-11 00:21:48.348802 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-11-11 00:21:48.348815 | orchestrator | ok: [testbed-manager] 2025-11-11 00:21:48.348827 | orchestrator | 2025-11-11 00:21:48.348839 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-11-11 00:21:48.348851 | orchestrator | Tuesday 11 November 2025 00:20:35 +0000 (0:00:31.469) 0:00:35.921 ****** 2025-11-11 00:21:48.348862 | orchestrator | changed: [testbed-manager] 2025-11-11 00:21:48.348874 | orchestrator | 2025-11-11 00:21:48.348887 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-11-11 00:21:48.348899 | orchestrator | Tuesday 11 November 2025 00:20:47 +0000 (0:00:12.073) 0:00:47.995 ****** 2025-11-11 00:21:48.348911 | orchestrator | Pausing for 60 seconds 2025-11-11 00:21:48.348924 | orchestrator | changed: [testbed-manager] 2025-11-11 00:21:48.348936 | orchestrator | 2025-11-11 00:21:48.348948 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-11-11 00:21:48.348960 | orchestrator | Tuesday 11 November 2025 00:21:47 +0000 (0:01:00.077) 0:01:48.073 ****** 2025-11-11 00:21:48.348972 | orchestrator | ok: [testbed-manager] 2025-11-11 00:21:48.348985 | orchestrator | 2025-11-11 00:21:48.348997 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-11-11 00:21:48.349009 | orchestrator | Tuesday 11 November 2025 00:21:47 +0000 (0:00:00.068) 0:01:48.141 ****** 2025-11-11 00:21:48.349021 | orchestrator | changed: [testbed-manager] 2025-11-11 00:21:48.349033 | orchestrator | 2025-11-11 00:21:48.349044 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:21:48.349057 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:21:48.349077 | orchestrator | 2025-11-11 00:21:48.349088 | orchestrator | 2025-11-11 00:21:48.349100 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-11-11 00:21:48.349113 | orchestrator | Tuesday 11 November 2025 00:21:48 +0000 (0:00:00.632) 0:01:48.773 ****** 2025-11-11 00:21:48.349125 | orchestrator | =============================================================================== 2025-11-11 00:21:48.349137 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-11-11 00:21:48.349149 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.47s 2025-11-11 00:21:48.349160 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.07s 2025-11-11 00:21:48.349171 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.13s 2025-11-11 00:21:48.349200 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s 2025-11-11 00:21:48.349213 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s 2025-11-11 00:21:48.349224 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s 2025-11-11 00:21:48.349234 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2025-11-11 00:21:48.349245 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2025-11-11 00:21:48.349256 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-11-11 00:21:48.349267 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-11-11 00:21:48.609668 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-11 00:21:48.609758 | orchestrator | ++ semver latest 9.0.0 2025-11-11 00:21:48.653824 | orchestrator | + [[ -1 -lt 0 ]] 2025-11-11 00:21:48.653883 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-11 00:21:48.654253 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-11-11 00:22:00.623151 | orchestrator | 2025-11-11 00:22:00 | INFO  | Task 896480b5-0ea4-4a64-90ad-a374a28bb8eb (operator) was prepared for execution. 2025-11-11 00:22:00.623334 | orchestrator | 2025-11-11 00:22:00 | INFO  | It takes a moment until task 896480b5-0ea4-4a64-90ad-a374a28bb8eb (operator) has been started and output is visible here. 2025-11-11 00:22:15.794650 | orchestrator | 2025-11-11 00:22:15.794779 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-11-11 00:22:15.794797 | orchestrator | 2025-11-11 00:22:15.794810 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-11 00:22:15.794821 | orchestrator | Tuesday 11 November 2025 00:22:04 +0000 (0:00:00.139) 0:00:00.139 ****** 2025-11-11 00:22:15.794833 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:22:15.794846 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:22:15.794857 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:22:15.794868 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:22:15.794879 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:22:15.794889 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:22:15.794900 | orchestrator | 2025-11-11 00:22:15.794911 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-11-11 00:22:15.794922 | orchestrator | Tuesday 11 November 2025 00:22:07 +0000 (0:00:03.127) 0:00:03.267 ****** 2025-11-11 00:22:15.794933 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:22:15.794943 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:22:15.794955 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:22:15.794966 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:22:15.794977 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:22:15.794987 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:22:15.794998 | orchestrator | 2025-11-11 
00:22:15.795014 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-11-11 00:22:15.795025 | orchestrator | 2025-11-11 00:22:15.795036 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-11 00:22:15.795047 | orchestrator | Tuesday 11 November 2025 00:22:08 +0000 (0:00:00.718) 0:00:03.986 ****** 2025-11-11 00:22:15.795058 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:22:15.795091 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:22:15.795103 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:22:15.795113 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:22:15.795124 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:22:15.795135 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:22:15.795145 | orchestrator | 2025-11-11 00:22:15.795156 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-11 00:22:15.795194 | orchestrator | Tuesday 11 November 2025 00:22:08 +0000 (0:00:00.143) 0:00:04.129 ****** 2025-11-11 00:22:15.795206 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:22:15.795218 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:22:15.795230 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:22:15.795242 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:22:15.795254 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:22:15.795267 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:22:15.795279 | orchestrator | 2025-11-11 00:22:15.795309 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-11 00:22:15.795322 | orchestrator | Tuesday 11 November 2025 00:22:08 +0000 (0:00:00.161) 0:00:04.291 ****** 2025-11-11 00:22:15.795335 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:15.795348 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:15.795360 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:22:15.795372 | 
orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:15.795384 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:15.795396 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:15.795408 | orchestrator | 2025-11-11 00:22:15.795420 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-11 00:22:15.795432 | orchestrator | Tuesday 11 November 2025 00:22:09 +0000 (0:00:00.577) 0:00:04.869 ****** 2025-11-11 00:22:15.795444 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:15.795456 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:15.795468 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:22:15.795480 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:15.795497 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:15.795510 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:15.795522 | orchestrator | 2025-11-11 00:22:15.795533 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-11 00:22:15.795544 | orchestrator | Tuesday 11 November 2025 00:22:10 +0000 (0:00:00.769) 0:00:05.638 ****** 2025-11-11 00:22:15.795555 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-11-11 00:22:15.795566 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-11-11 00:22:15.795577 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-11-11 00:22:15.795587 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-11-11 00:22:15.795598 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-11-11 00:22:15.795609 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-11-11 00:22:15.795619 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-11-11 00:22:15.795630 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-11-11 00:22:15.795641 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-11-11 00:22:15.795652 | orchestrator | changed: 
[testbed-node-1] => (item=sudo) 2025-11-11 00:22:15.795663 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-11-11 00:22:15.795673 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-11-11 00:22:15.795684 | orchestrator | 2025-11-11 00:22:15.795695 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-11 00:22:15.795706 | orchestrator | Tuesday 11 November 2025 00:22:11 +0000 (0:00:01.155) 0:00:06.793 ****** 2025-11-11 00:22:15.795717 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:15.795728 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:15.795738 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:22:15.795749 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:15.795760 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:15.795770 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:15.795781 | orchestrator | 2025-11-11 00:22:15.795801 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-11 00:22:15.795813 | orchestrator | Tuesday 11 November 2025 00:22:12 +0000 (0:00:01.113) 0:00:07.907 ****** 2025-11-11 00:22:15.795824 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-11-11 00:22:15.795835 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-11-11 00:22:15.795846 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-11-11 00:22:15.795857 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795892 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795912 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795931 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795949 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795966 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-11-11 00:22:15.795977 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.795988 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.795998 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.796009 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.796019 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.796030 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-11-11 00:22:15.796040 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796051 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796061 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796072 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796082 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796093 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-11-11 00:22:15.796103 | 
orchestrator | 2025-11-11 00:22:15.796114 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-11 00:22:15.796125 | orchestrator | Tuesday 11 November 2025 00:22:13 +0000 (0:00:01.182) 0:00:09.089 ****** 2025-11-11 00:22:15.796135 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:15.796145 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:15.796156 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:15.796197 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:15.796208 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:15.796218 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:15.796229 | orchestrator | 2025-11-11 00:22:15.796240 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2025-11-11 00:22:15.796250 | orchestrator | Tuesday 11 November 2025 00:22:13 +0000 (0:00:00.211) 0:00:09.301 ****** 2025-11-11 00:22:15.796261 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:15.796272 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:15.796283 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:15.796293 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:15.796304 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:15.796315 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:15.796325 | orchestrator | 2025-11-11 00:22:15.796336 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-11 00:22:15.796347 | orchestrator | Tuesday 11 November 2025 00:22:14 +0000 (0:00:00.183) 0:00:09.485 ****** 2025-11-11 00:22:15.796357 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:15.796368 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:15.796379 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:15.796399 | orchestrator | changed: [testbed-node-2] 2025-11-11 
00:22:15.796409 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:15.796420 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:15.796431 | orchestrator | 2025-11-11 00:22:15.796441 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-11 00:22:15.796452 | orchestrator | Tuesday 11 November 2025 00:22:14 +0000 (0:00:00.583) 0:00:10.068 ****** 2025-11-11 00:22:15.796463 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:15.796473 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:15.796484 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:15.796494 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:15.796505 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:15.796515 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:15.796526 | orchestrator | 2025-11-11 00:22:15.796537 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-11 00:22:15.796547 | orchestrator | Tuesday 11 November 2025 00:22:14 +0000 (0:00:00.202) 0:00:10.270 ****** 2025-11-11 00:22:15.796558 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-11 00:22:15.796569 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-11 00:22:15.796580 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-11 00:22:15.796591 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:15.796601 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:15.796612 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:15.796622 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-11 00:22:15.796633 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:15.796644 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-11 00:22:15.796654 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:22:15.796665 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-11 
00:22:15.796676 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:15.796686 | orchestrator | 2025-11-11 00:22:15.796697 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-11 00:22:15.796708 | orchestrator | Tuesday 11 November 2025 00:22:15 +0000 (0:00:00.682) 0:00:10.952 ****** 2025-11-11 00:22:15.796719 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:15.796729 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:15.796740 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:15.796750 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:15.796761 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:15.796771 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:15.796782 | orchestrator | 2025-11-11 00:22:15.796792 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-11 00:22:15.796803 | orchestrator | Tuesday 11 November 2025 00:22:15 +0000 (0:00:00.156) 0:00:11.108 ****** 2025-11-11 00:22:15.796814 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:15.796825 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:15.796835 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:15.796846 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:15.796865 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:17.072906 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:17.073018 | orchestrator | 2025-11-11 00:22:17.073034 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-11 00:22:17.073048 | orchestrator | Tuesday 11 November 2025 00:22:15 +0000 (0:00:00.154) 0:00:11.263 ****** 2025-11-11 00:22:17.073060 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:17.073070 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:17.073081 | orchestrator | skipping: [testbed-node-2] 2025-11-11 
00:22:17.073092 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:17.073103 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:17.073114 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:17.073126 | orchestrator | 2025-11-11 00:22:17.073137 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-11 00:22:17.073223 | orchestrator | Tuesday 11 November 2025 00:22:15 +0000 (0:00:00.178) 0:00:11.442 ****** 2025-11-11 00:22:17.073237 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:22:17.073248 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:22:17.073258 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:22:17.073269 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:22:17.073280 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:22:17.073291 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:22:17.073302 | orchestrator | 2025-11-11 00:22:17.073313 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-11 00:22:17.073324 | orchestrator | Tuesday 11 November 2025 00:22:16 +0000 (0:00:00.644) 0:00:12.086 ****** 2025-11-11 00:22:17.073334 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:22:17.073345 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:22:17.073356 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:22:17.073366 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:22:17.073377 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:22:17.073387 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:22:17.073398 | orchestrator | 2025-11-11 00:22:17.073409 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:22:17.073421 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073451 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073464 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073476 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073489 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073506 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-11 00:22:17.073518 | orchestrator | 2025-11-11 00:22:17.073530 | orchestrator | 2025-11-11 00:22:17.073542 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:22:17.073555 | orchestrator | Tuesday 11 November 2025 00:22:16 +0000 (0:00:00.237) 0:00:12.323 ****** 2025-11-11 00:22:17.073567 | orchestrator | =============================================================================== 2025-11-11 00:22:17.073580 | orchestrator | Gathering Facts --------------------------------------------------------- 3.13s 2025-11-11 00:22:17.073593 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s 2025-11-11 00:22:17.073607 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2025-11-11 00:22:17.073619 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.11s 2025-11-11 00:22:17.073632 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2025-11-11 00:22:17.073644 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s 2025-11-11 00:22:17.073656 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s 2025-11-11 00:22:17.073668 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.64s 2025-11-11 00:22:17.073680 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2025-11-11 00:22:17.073692 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s 2025-11-11 00:22:17.073704 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2025-11-11 00:22:17.073716 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.21s 2025-11-11 00:22:17.073738 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-11-11 00:22:17.073750 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2025-11-11 00:22:17.073762 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-11-11 00:22:17.073775 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-11-11 00:22:17.073787 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-11-11 00:22:17.073799 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-11-11 00:22:17.073810 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-11-11 00:22:17.438249 | orchestrator | + osism apply --environment custom facts 2025-11-11 00:22:19.305855 | orchestrator | 2025-11-11 00:22:19 | INFO  | Trying to run play facts in environment custom 2025-11-11 00:22:29.437565 | orchestrator | 2025-11-11 00:22:29 | INFO  | Task b3486762-7686-4b50-9f31-24c5bbc73be0 (facts) was prepared for execution. 2025-11-11 00:22:29.437692 | orchestrator | 2025-11-11 00:22:29 | INFO  | It takes a moment until task b3486762-7686-4b50-9f31-24c5bbc73be0 (facts) has been started and output is visible here. 
2025-11-11 00:23:12.137193 | orchestrator | 2025-11-11 00:23:12.137268 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-11-11 00:23:12.137277 | orchestrator | 2025-11-11 00:23:12.137283 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-11 00:23:12.137290 | orchestrator | Tuesday 11 November 2025 00:22:32 +0000 (0:00:00.060) 0:00:00.060 ****** 2025-11-11 00:23:12.137297 | orchestrator | ok: [testbed-manager] 2025-11-11 00:23:12.137304 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:23:12.137310 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:23:12.137316 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137322 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:23:12.137329 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137335 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137341 | orchestrator | 2025-11-11 00:23:12.137348 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-11-11 00:23:12.137354 | orchestrator | Tuesday 11 November 2025 00:22:34 +0000 (0:00:01.312) 0:00:01.373 ****** 2025-11-11 00:23:12.137360 | orchestrator | ok: [testbed-manager] 2025-11-11 00:23:12.137367 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:23:12.137373 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:23:12.137379 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137386 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137392 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137398 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:23:12.137405 | orchestrator | 2025-11-11 00:23:12.137411 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-11-11 00:23:12.137417 | orchestrator | 2025-11-11 00:23:12.137423 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-11-11 00:23:12.137429 | orchestrator | Tuesday 11 November 2025 00:22:35 +0000 (0:00:01.044) 0:00:02.417 ****** 2025-11-11 00:23:12.137436 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137442 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137448 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137455 | orchestrator | 2025-11-11 00:23:12.137461 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-11 00:23:12.137468 | orchestrator | Tuesday 11 November 2025 00:22:35 +0000 (0:00:00.074) 0:00:02.491 ****** 2025-11-11 00:23:12.137474 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137480 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137486 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137492 | orchestrator | 2025-11-11 00:23:12.137499 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-11 00:23:12.137520 | orchestrator | Tuesday 11 November 2025 00:22:35 +0000 (0:00:00.168) 0:00:02.660 ****** 2025-11-11 00:23:12.137526 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137532 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137538 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137544 | orchestrator | 2025-11-11 00:23:12.137558 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-11 00:23:12.137564 | orchestrator | Tuesday 11 November 2025 00:22:35 +0000 (0:00:00.169) 0:00:02.829 ****** 2025-11-11 00:23:12.137571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:23:12.137578 | orchestrator | 2025-11-11 00:23:12.137584 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-11-11 00:23:12.137590 | orchestrator | Tuesday 11 November 2025 00:22:35 +0000 (0:00:00.107) 0:00:02.937 ****** 2025-11-11 00:23:12.137596 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137602 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137608 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137615 | orchestrator | 2025-11-11 00:23:12.137621 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-11 00:23:12.137627 | orchestrator | Tuesday 11 November 2025 00:22:36 +0000 (0:00:00.391) 0:00:03.328 ****** 2025-11-11 00:23:12.137633 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:23:12.137639 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:23:12.137645 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:23:12.137651 | orchestrator | 2025-11-11 00:23:12.137657 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-11 00:23:12.137663 | orchestrator | Tuesday 11 November 2025 00:22:36 +0000 (0:00:00.122) 0:00:03.450 ****** 2025-11-11 00:23:12.137669 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137675 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137681 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137687 | orchestrator | 2025-11-11 00:23:12.137693 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-11 00:23:12.137699 | orchestrator | Tuesday 11 November 2025 00:22:37 +0000 (0:00:00.942) 0:00:04.392 ****** 2025-11-11 00:23:12.137705 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137711 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137717 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137723 | orchestrator | 2025-11-11 00:23:12.137729 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-11 
00:23:12.137736 | orchestrator | Tuesday 11 November 2025 00:22:37 +0000 (0:00:00.430) 0:00:04.823 ****** 2025-11-11 00:23:12.137741 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137747 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137753 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137760 | orchestrator | 2025-11-11 00:23:12.137766 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-11 00:23:12.137772 | orchestrator | Tuesday 11 November 2025 00:22:38 +0000 (0:00:00.950) 0:00:05.774 ****** 2025-11-11 00:23:12.137778 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137784 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137790 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137796 | orchestrator | 2025-11-11 00:23:12.137802 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-11-11 00:23:12.137808 | orchestrator | Tuesday 11 November 2025 00:22:56 +0000 (0:00:17.916) 0:00:23.690 ****** 2025-11-11 00:23:12.137814 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:23:12.137820 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:23:12.137826 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:23:12.137832 | orchestrator | 2025-11-11 00:23:12.137838 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-11-11 00:23:12.137855 | orchestrator | Tuesday 11 November 2025 00:22:56 +0000 (0:00:00.109) 0:00:23.799 ****** 2025-11-11 00:23:12.137867 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:23:12.137872 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:23:12.137879 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:23:12.137885 | orchestrator | 2025-11-11 00:23:12.137891 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-11 
00:23:12.137897 | orchestrator | Tuesday 11 November 2025 00:23:03 +0000 (0:00:07.143) 0:00:30.943 ****** 2025-11-11 00:23:12.137902 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.137908 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.137914 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.137921 | orchestrator | 2025-11-11 00:23:12.137926 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-11 00:23:12.137932 | orchestrator | Tuesday 11 November 2025 00:23:04 +0000 (0:00:00.414) 0:00:31.358 ****** 2025-11-11 00:23:12.137938 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-11-11 00:23:12.137944 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-11-11 00:23:12.137949 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-11-11 00:23:12.137955 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-11-11 00:23:12.137961 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-11-11 00:23:12.137967 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-11-11 00:23:12.137972 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-11-11 00:23:12.137979 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-11-11 00:23:12.137985 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-11-11 00:23:12.137991 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-11-11 00:23:12.137998 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-11-11 00:23:12.138004 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-11-11 00:23:12.138010 | orchestrator | 2025-11-11 00:23:12.138049 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-11-11 00:23:12.138056 | orchestrator | Tuesday 11 November 2025 00:23:07 +0000 (0:00:03.355) 0:00:34.713 ****** 2025-11-11 00:23:12.138063 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.138069 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.138076 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.138082 | orchestrator | 2025-11-11 00:23:12.138088 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-11 00:23:12.138095 | orchestrator | 2025-11-11 00:23:12.138102 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-11 00:23:12.138108 | orchestrator | Tuesday 11 November 2025 00:23:08 +0000 (0:00:01.124) 0:00:35.837 ****** 2025-11-11 00:23:12.138126 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:23:12.138133 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:23:12.138140 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:23:12.138146 | orchestrator | ok: [testbed-manager] 2025-11-11 00:23:12.138152 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:12.138181 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:12.138188 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:12.138194 | orchestrator | 2025-11-11 00:23:12.138200 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:23:12.138207 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:23:12.138214 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:23:12.138222 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:23:12.138228 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:23:12.138239 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:23:12.138246 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:23:12.138252 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:23:12.138258 | orchestrator | 2025-11-11 00:23:12.138264 | orchestrator | 2025-11-11 00:23:12.138270 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:23:12.138277 | orchestrator | Tuesday 11 November 2025 00:23:12 +0000 (0:00:03.375) 0:00:39.213 ****** 2025-11-11 00:23:12.138283 | orchestrator | =============================================================================== 2025-11-11 00:23:12.138289 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.92s 2025-11-11 00:23:12.138295 | orchestrator | Install required packages (Debian) -------------------------------------- 7.14s 2025-11-11 00:23:12.138302 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.38s 2025-11-11 00:23:12.138308 | orchestrator | Copy fact files --------------------------------------------------------- 3.36s 2025-11-11 00:23:12.138314 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s 2025-11-11 00:23:12.138320 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.12s 2025-11-11 00:23:12.138332 | orchestrator | Copy fact file ---------------------------------------------------------- 1.04s 2025-11-11 00:23:12.348210 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.95s 2025-11-11 00:23:12.348290 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.94s 2025-11-11 00:23:12.348302 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.43s 2025-11-11 00:23:12.348312 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s 2025-11-11 00:23:12.348322 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.39s 2025-11-11 00:23:12.348331 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-11-11 00:23:12.348340 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s 2025-11-11 00:23:12.348350 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-11-11 00:23:12.348359 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-11-11 00:23:12.348368 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2025-11-11 00:23:12.348378 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s 2025-11-11 00:23:12.617604 | orchestrator | + osism apply bootstrap 2025-11-11 00:23:24.586756 | orchestrator | 2025-11-11 00:23:24 | INFO  | Task 64bc4fb8-afa7-455c-9268-66de9ef90c60 (bootstrap) was prepared for execution. 2025-11-11 00:23:24.586879 | orchestrator | 2025-11-11 00:23:24 | INFO  | It takes a moment until task 64bc4fb8-afa7-455c-9268-66de9ef90c60 (bootstrap) has been started and output is visible here. 
2025-11-11 00:23:40.268998 | orchestrator | 2025-11-11 00:23:40.269195 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-11-11 00:23:40.269226 | orchestrator | 2025-11-11 00:23:40.269247 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-11-11 00:23:40.269267 | orchestrator | Tuesday 11 November 2025 00:23:28 +0000 (0:00:00.103) 0:00:00.103 ****** 2025-11-11 00:23:40.269287 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:23:40.269309 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:23:40.269327 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:23:40.269378 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:40.269399 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:40.269419 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:40.269439 | orchestrator | ok: [testbed-manager] 2025-11-11 00:23:40.269459 | orchestrator | 2025-11-11 00:23:40.269499 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-11 00:23:40.269521 | orchestrator | 2025-11-11 00:23:40.269541 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-11 00:23:40.269563 | orchestrator | Tuesday 11 November 2025 00:23:28 +0000 (0:00:00.196) 0:00:00.299 ****** 2025-11-11 00:23:40.269585 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:23:40.269608 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:23:40.269629 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:23:40.269652 | orchestrator | ok: [testbed-manager] 2025-11-11 00:23:40.269672 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:23:40.269694 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:23:40.269716 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:23:40.269738 | orchestrator | 2025-11-11 00:23:40.269759 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-11-11 00:23:40.269778 | orchestrator |
2025-11-11 00:23:40.269798 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-11 00:23:40.269814 | orchestrator | Tuesday 11 November 2025 00:23:32 +0000 (0:00:04.446) 0:00:04.745 ******
2025-11-11 00:23:40.269834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:23:40.269856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:23:40.269873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:23:40.269889 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-11-11 00:23:40.269905 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-11-11 00:23:40.269920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-11-11 00:23:40.269935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-11-11 00:23:40.269951 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-11-11 00:23:40.269967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-11-11 00:23:40.269982 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-11-11 00:23:40.269998 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-11-11 00:23:40.270090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-11-11 00:23:40.270141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-11-11 00:23:40.270158 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:40.270173 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-11-11 00:23:40.270189 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-11-11 00:23:40.270206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-11 00:23:40.270222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-11 00:23:40.270238 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-11-11 00:23:40.270255 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-11-11 00:23:40.270271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-11-11 00:23:40.270287 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:40.270303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-11 00:23:40.270319 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-11 00:23:40.270335 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-11-11 00:23:40.270350 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-11 00:23:40.270366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-11-11 00:23:40.270382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-11 00:23:40.270398 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-11 00:23:40.270432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-11-11 00:23:40.270449 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:40.270465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:23:40.270481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-11 00:23:40.270497 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-11-11 00:23:40.270513 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-11-11 00:23:40.270529 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-11 00:23:40.270545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:23:40.270562 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-11-11 00:23:40.270578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-11-11 00:23:40.270594 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-11-11 00:23:40.270610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:23:40.270626 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-11-11 00:23:40.270644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-11-11 00:23:40.270660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-11-11 00:23:40.270676 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-11-11 00:23:40.270694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-11-11 00:23:40.270742 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:40.270764 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-11-11 00:23:40.270781 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-11-11 00:23:40.270798 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:40.270815 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-11-11 00:23:40.270833 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-11-11 00:23:40.270850 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:40.270868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-11-11 00:23:40.270886 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-11-11 00:23:40.270904 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:40.270921 | orchestrator |
2025-11-11 00:23:40.270940 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-11-11 00:23:40.270958 | orchestrator |
2025-11-11 00:23:40.270977 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-11-11 00:23:40.270994 | orchestrator | Tuesday 11 November 2025 00:23:33 +0000 (0:00:00.388) 0:00:05.133 ******
2025-11-11 00:23:40.271012 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:40.271030 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:40.271048 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:40.271065 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:40.271083 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:40.271129 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:40.271167 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:40.271187 | orchestrator |
2025-11-11 00:23:40.271206 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-11-11 00:23:40.271225 | orchestrator | Tuesday 11 November 2025 00:23:34 +0000 (0:00:01.088) 0:00:06.222 ******
2025-11-11 00:23:40.271246 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:40.271264 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:40.271280 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:40.271298 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:40.271316 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:40.271335 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:40.271353 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:40.271370 | orchestrator |
2025-11-11 00:23:40.271388 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-11-11 00:23:40.271407 | orchestrator | Tuesday 11 November 2025 00:23:35 +0000 (0:00:01.120) 0:00:07.342 ******
2025-11-11 00:23:40.271434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:23:40.271447 | orchestrator |
2025-11-11 00:23:40.271457 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-11-11 00:23:40.271467 | orchestrator | Tuesday 11 November 2025 00:23:35 +0000 (0:00:00.273) 0:00:07.615 ******
2025-11-11 00:23:40.271477 | orchestrator | changed: [testbed-manager]
2025-11-11 00:23:40.271486 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:40.271495 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:40.271505 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:40.271514 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:40.271524 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:40.271533 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:40.271542 | orchestrator |
2025-11-11 00:23:40.271552 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-11-11 00:23:40.271561 | orchestrator | Tuesday 11 November 2025 00:23:37 +0000 (0:00:01.915) 0:00:09.531 ******
2025-11-11 00:23:40.271571 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:40.271582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:23:40.271593 | orchestrator |
2025-11-11 00:23:40.271603 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-11-11 00:23:40.271612 | orchestrator | Tuesday 11 November 2025 00:23:38 +0000 (0:00:00.257) 0:00:09.788 ******
2025-11-11 00:23:40.271622 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:40.271631 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:40.271641 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:40.271650 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:40.271659 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:40.271681 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:40.271691 | orchestrator |
2025-11-11 00:23:40.271700 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-11-11 00:23:40.271710 | orchestrator | Tuesday 11 November 2025 00:23:38 +0000 (0:00:00.958) 0:00:10.747 ******
2025-11-11 00:23:40.271719 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:40.271729 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:40.271738 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:40.271747 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:40.271757 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:40.271766 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:40.271776 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:40.271785 | orchestrator |
2025-11-11 00:23:40.271795 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-11-11 00:23:40.271804 | orchestrator | Tuesday 11 November 2025 00:23:39 +0000 (0:00:00.534) 0:00:11.282 ******
2025-11-11 00:23:40.271814 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:40.271824 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:40.271833 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:40.271842 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:40.271852 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:40.271861 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:40.271871 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:40.271880 | orchestrator |
2025-11-11 00:23:40.271890 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-11-11 00:23:40.271900 | orchestrator | Tuesday 11 November 2025 00:23:40 +0000 (0:00:00.599) 0:00:11.882 ******
2025-11-11 00:23:40.271910 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:40.271919 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:40.271942 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:52.738898 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:52.739017 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:52.739027 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:52.739035 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:52.739043 | orchestrator |
2025-11-11 00:23:52.739052 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-11-11 00:23:52.739061 | orchestrator | Tuesday 11 November 2025 00:23:40 +0000 (0:00:00.235) 0:00:12.117 ******
2025-11-11 00:23:52.739167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:23:52.739189 | orchestrator |
2025-11-11 00:23:52.739197 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-11-11 00:23:52.739205 | orchestrator | Tuesday 11 November 2025 00:23:40 +0000 (0:00:00.296) 0:00:12.413 ******
2025-11-11 00:23:52.739213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:23:52.739221 | orchestrator |
2025-11-11 00:23:52.739228 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-11-11 00:23:52.739235 | orchestrator | Tuesday 11 November 2025 00:23:40 +0000 (0:00:00.316) 0:00:12.730 ******
2025-11-11 00:23:52.739243 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.739251 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.739258 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.739266 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739273 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.739280 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.739287 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.739294 | orchestrator |
2025-11-11 00:23:52.739302 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-11-11 00:23:52.739309 | orchestrator | Tuesday 11 November 2025 00:23:42 +0000 (0:00:01.281) 0:00:14.011 ******
2025-11-11 00:23:52.739316 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:52.739323 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:52.739330 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:52.739338 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:52.739345 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:52.739352 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:52.739359 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:52.739366 | orchestrator |
2025-11-11 00:23:52.739374 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-11-11 00:23:52.739381 | orchestrator | Tuesday 11 November 2025 00:23:42 +0000 (0:00:00.234) 0:00:14.245 ******
2025-11-11 00:23:52.739388 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.739395 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.739403 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.739411 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.739418 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.739425 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.739432 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739440 | orchestrator |
2025-11-11 00:23:52.739447 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-11-11 00:23:52.739454 | orchestrator | Tuesday 11 November 2025 00:23:43 +0000 (0:00:00.538) 0:00:14.784 ******
2025-11-11 00:23:52.739462 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:52.739469 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:52.739476 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:52.739483 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:52.739490 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:52.739497 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:52.739505 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:52.739532 | orchestrator |
2025-11-11 00:23:52.739540 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-11-11 00:23:52.739549 | orchestrator | Tuesday 11 November 2025 00:23:43 +0000 (0:00:00.243) 0:00:15.028 ******
2025-11-11 00:23:52.739556 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:52.739563 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:52.739570 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:52.739577 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:52.739584 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:52.739591 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:52.739598 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739606 | orchestrator |
2025-11-11 00:23:52.739613 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-11-11 00:23:52.739620 | orchestrator | Tuesday 11 November 2025 00:23:43 +0000 (0:00:00.561) 0:00:15.590 ******
2025-11-11 00:23:52.739627 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:52.739634 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:52.739641 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:52.739648 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:52.739655 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739662 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:52.739669 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:52.739676 | orchestrator |
2025-11-11 00:23:52.739683 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-11-11 00:23:52.739690 | orchestrator | Tuesday 11 November 2025 00:23:45 +0000 (0:00:01.179) 0:00:16.770 ******
2025-11-11 00:23:52.739698 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.739705 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.739712 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.739719 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.739726 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.739733 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.739740 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739747 | orchestrator |
2025-11-11 00:23:52.739755 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-11-11 00:23:52.739762 | orchestrator | Tuesday 11 November 2025 00:23:46 +0000 (0:00:01.955) 0:00:18.725 ******
2025-11-11 00:23:52.739785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:23:52.739793 | orchestrator |
2025-11-11 00:23:52.739800 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-11-11 00:23:52.739807 | orchestrator | Tuesday 11 November 2025 00:23:47 +0000 (0:00:00.304) 0:00:19.030 ******
2025-11-11 00:23:52.739814 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:52.739822 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:23:52.739829 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:52.739840 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:52.739847 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:52.739867 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:23:52.739874 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:23:52.739881 | orchestrator |
2025-11-11 00:23:52.739888 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-11-11 00:23:52.739902 | orchestrator | Tuesday 11 November 2025 00:23:48 +0000 (0:00:01.195) 0:00:20.225 ******
2025-11-11 00:23:52.739909 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.739916 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.739923 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.739931 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.739938 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.739945 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.739952 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.739966 | orchestrator |
2025-11-11 00:23:52.739974 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-11-11 00:23:52.739981 | orchestrator | Tuesday 11 November 2025 00:23:48 +0000 (0:00:00.227) 0:00:20.453 ******
2025-11-11 00:23:52.739988 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.739995 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.740002 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.740009 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740016 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740023 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740030 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740037 | orchestrator |
2025-11-11 00:23:52.740044 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-11-11 00:23:52.740051 | orchestrator | Tuesday 11 November 2025 00:23:48 +0000 (0:00:00.255) 0:00:20.709 ******
2025-11-11 00:23:52.740058 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.740065 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.740072 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.740079 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740098 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740106 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740113 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740120 | orchestrator |
2025-11-11 00:23:52.740127 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-11-11 00:23:52.740135 | orchestrator | Tuesday 11 November 2025 00:23:49 +0000 (0:00:00.235) 0:00:20.944 ******
2025-11-11 00:23:52.740143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:23:52.740152 | orchestrator |
2025-11-11 00:23:52.740159 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-11-11 00:23:52.740166 | orchestrator | Tuesday 11 November 2025 00:23:49 +0000 (0:00:00.285) 0:00:21.230 ******
2025-11-11 00:23:52.740173 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.740180 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.740187 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.740194 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740201 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740208 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740215 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740222 | orchestrator |
2025-11-11 00:23:52.740229 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-11-11 00:23:52.740236 | orchestrator | Tuesday 11 November 2025 00:23:50 +0000 (0:00:00.541) 0:00:21.771 ******
2025-11-11 00:23:52.740243 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:23:52.740250 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:23:52.740257 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:23:52.740264 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:23:52.740271 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:23:52.740278 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:23:52.740285 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:23:52.740292 | orchestrator |
2025-11-11 00:23:52.740299 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-11-11 00:23:52.740306 | orchestrator | Tuesday 11 November 2025 00:23:50 +0000 (0:00:00.270) 0:00:22.042 ******
2025-11-11 00:23:52.740314 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:23:52.740320 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740328 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:23:52.740334 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:23:52.740341 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740348 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740356 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740363 | orchestrator |
2025-11-11 00:23:52.740370 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-11-11 00:23:52.740382 | orchestrator | Tuesday 11 November 2025 00:23:51 +0000 (0:00:00.931) 0:00:22.973 ******
2025-11-11 00:23:52.740390 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:23:52.740397 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:23:52.740404 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:23:52.740411 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740418 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740425 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740432 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740439 | orchestrator |
2025-11-11 00:23:52.740446 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-11-11 00:23:52.740454 | orchestrator | Tuesday 11 November 2025 00:23:51 +0000 (0:00:00.501) 0:00:23.474 ******
2025-11-11 00:23:52.740461 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:23:52.740468 | orchestrator | ok: [testbed-manager]
2025-11-11 00:23:52.740475 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:23:52.740482 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:23:52.740494 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.190400 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.190542 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.190559 | orchestrator |
2025-11-11 00:24:34.190573 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-11-11 00:24:34.190586 | orchestrator | Tuesday 11 November 2025 00:23:52 +0000 (0:00:01.018) 0:00:24.493 ******
2025-11-11 00:24:34.190597 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.190609 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.190635 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.190647 | orchestrator | changed: [testbed-manager]
2025-11-11 00:24:34.190658 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.190670 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.190682 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.190693 | orchestrator |
2025-11-11 00:24:34.190704 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-11-11 00:24:34.190716 | orchestrator | Tuesday 11 November 2025 00:24:11 +0000 (0:00:18.618) 0:00:43.112 ******
2025-11-11 00:24:34.190727 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.190738 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.190749 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.190759 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.190770 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.190781 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.190792 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.190803 | orchestrator |
2025-11-11 00:24:34.190814 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-11-11 00:24:34.190826 | orchestrator | Tuesday 11 November 2025 00:24:11 +0000 (0:00:00.242) 0:00:43.354 ******
2025-11-11 00:24:34.190836 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.190847 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.190859 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.190870 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.190881 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.190891 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.190902 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.190914 | orchestrator |
2025-11-11 00:24:34.190926 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-11-11 00:24:34.190938 | orchestrator | Tuesday 11 November 2025 00:24:11 +0000 (0:00:00.225) 0:00:43.580 ******
2025-11-11 00:24:34.190951 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.190963 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.190975 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.190987 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.191000 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.191011 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.191021 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.191032 | orchestrator |
2025-11-11 00:24:34.191043 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-11-11 00:24:34.191121 | orchestrator | Tuesday 11 November 2025 00:24:12 +0000 (0:00:00.218) 0:00:43.799 ******
2025-11-11 00:24:34.191145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:24:34.191165 | orchestrator |
2025-11-11 00:24:34.191182 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-11-11 00:24:34.191199 | orchestrator | Tuesday 11 November 2025 00:24:12 +0000 (0:00:00.271) 0:00:44.071 ******
2025-11-11 00:24:34.191239 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.191258 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.191275 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.191292 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.191308 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.191325 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.191344 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.191362 | orchestrator |
2025-11-11 00:24:34.191381 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-11-11 00:24:34.191398 | orchestrator | Tuesday 11 November 2025 00:24:13 +0000 (0:00:01.446) 0:00:45.517 ******
2025-11-11 00:24:34.191416 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.191435 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:24:34.191450 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.191461 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.191472 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:24:34.191482 | orchestrator | changed: [testbed-manager]
2025-11-11 00:24:34.191493 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:24:34.191503 | orchestrator |
2025-11-11 00:24:34.191514 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-11-11 00:24:34.191525 | orchestrator | Tuesday 11 November 2025 00:24:14 +0000 (0:00:00.983) 0:00:46.501 ******
2025-11-11 00:24:34.191535 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.191546 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.191557 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.191567 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.191578 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.191588 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.191599 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.191609 | orchestrator |
2025-11-11 00:24:34.191620 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-11-11 00:24:34.191630 | orchestrator | Tuesday 11 November 2025 00:24:15 +0000 (0:00:00.752) 0:00:47.254 ******
2025-11-11 00:24:34.191642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:24:34.191655 | orchestrator |
2025-11-11 00:24:34.191666 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-11-11 00:24:34.191678 | orchestrator | Tuesday 11 November 2025 00:24:15 +0000 (0:00:00.294) 0:00:47.548 ******
2025-11-11 00:24:34.191688 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.191699 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.191709 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.191720 | orchestrator | changed: [testbed-manager]
2025-11-11 00:24:34.191731 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:24:34.191741 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:24:34.191751 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:24:34.191762 | orchestrator |
2025-11-11 00:24:34.191794 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-11-11 00:24:34.191806 | orchestrator | Tuesday 11 November 2025 00:24:16 +0000 (0:00:00.900) 0:00:48.449 ******
2025-11-11 00:24:34.191817 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:24:34.191827 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:24:34.191850 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:24:34.191860 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:24:34.191871 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:24:34.191881 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:24:34.191892 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:24:34.191902 | orchestrator |
2025-11-11 00:24:34.191919 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2025-11-11 00:24:34.191931 | orchestrator | Tuesday 11 November 2025 00:24:16 +0000 (0:00:00.196) 0:00:48.646 ******
2025-11-11 00:24:34.191942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:24:34.191953 | orchestrator |
2025-11-11 00:24:34.191964 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2025-11-11 00:24:34.191974 | orchestrator | Tuesday 11 November 2025 00:24:17 +0000 (0:00:00.295) 0:00:48.941 ******
2025-11-11 00:24:34.191985 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.191995 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.192006 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.192017 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.192027 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.192038 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.192049 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.192059 | orchestrator |
2025-11-11 00:24:34.192104 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2025-11-11 00:24:34.192115 | orchestrator | Tuesday 11 November 2025 00:24:18 +0000 (0:00:01.571) 0:00:50.513 ******
2025-11-11 00:24:34.192126 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.192137 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.192148 | orchestrator | changed: [testbed-manager]
2025-11-11 00:24:34.192158 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.192169 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:24:34.192180 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:24:34.192190 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:24:34.192201 | orchestrator |
2025-11-11 00:24:34.192212 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-11-11 00:24:34.192223 | orchestrator | Tuesday 11 November 2025 00:24:19 +0000 (0:00:01.003) 0:00:51.516 ******
2025-11-11 00:24:34.192234 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:24:34.192245 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:24:34.192255 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:24:34.192266 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:24:34.192277 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:24:34.192288 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:24:34.192299 | orchestrator | changed: [testbed-manager]
2025-11-11 00:24:34.192310 | orchestrator |
2025-11-11 00:24:34.192321 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-11-11 00:24:34.192332 | orchestrator | Tuesday 11 November 2025 00:24:31 +0000 (0:00:11.771) 0:01:03.288 ******
2025-11-11 00:24:34.192343 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.192354 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.192364 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.192375 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.192386 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.192397 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.192408 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.192419 | orchestrator |
2025-11-11 00:24:34.192430 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-11-11 00:24:34.192441 | orchestrator | Tuesday 11 November 2025 00:24:32 +0000 (0:00:01.131) 0:01:04.420 ******
2025-11-11 00:24:34.192452 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.192462 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.192473 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.192484 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.192502 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.192513 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.192524 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.192534 | orchestrator |
2025-11-11 00:24:34.192545 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-11-11 00:24:34.192556 | orchestrator | Tuesday 11 November 2025 00:24:33 +0000 (0:00:00.818) 0:01:05.238 ******
2025-11-11 00:24:34.192567 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:24:34.192578 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:24:34.192589 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:24:34.192599 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:24:34.192610 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:24:34.192621 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:24:34.192632 | orchestrator | ok: [testbed-manager]
2025-11-11 00:24:34.192643 | orchestrator |
2025-11-11 00:24:34.192653 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-11-11 00:24:34.192665 | orchestrator | Tuesday
11 November 2025 00:24:33 +0000 (0:00:00.207) 0:01:05.445 ****** 2025-11-11 00:24:34.192676 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:24:34.192686 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:24:34.192697 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:24:34.192708 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:24:34.192719 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:24:34.192729 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:24:34.192740 | orchestrator | ok: [testbed-manager] 2025-11-11 00:24:34.192751 | orchestrator | 2025-11-11 00:24:34.192762 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-11-11 00:24:34.192773 | orchestrator | Tuesday 11 November 2025 00:24:33 +0000 (0:00:00.212) 0:01:05.658 ****** 2025-11-11 00:24:34.192785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-11 00:24:34.192796 | orchestrator | 2025-11-11 00:24:34.192815 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-11-11 00:26:44.834867 | orchestrator | Tuesday 11 November 2025 00:24:34 +0000 (0:00:00.287) 0:01:05.946 ****** 2025-11-11 00:26:44.835110 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.835133 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.835145 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.835157 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.835167 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.835179 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.835189 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.835200 | orchestrator | 2025-11-11 00:26:44.835213 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2025-11-11 00:26:44.835246 | orchestrator | Tuesday 11 November 2025 00:24:35 +0000 (0:00:01.703) 0:01:07.649 ****** 2025-11-11 00:26:44.835257 | orchestrator | changed: [testbed-manager] 2025-11-11 00:26:44.835269 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:26:44.835279 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:26:44.835290 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:26:44.835302 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:26:44.835314 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:26:44.835326 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:26:44.835338 | orchestrator | 2025-11-11 00:26:44.835351 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-11-11 00:26:44.835365 | orchestrator | Tuesday 11 November 2025 00:24:36 +0000 (0:00:00.603) 0:01:08.252 ****** 2025-11-11 00:26:44.835377 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.835389 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.835401 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.835413 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.835425 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.835437 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.835475 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.835487 | orchestrator | 2025-11-11 00:26:44.835499 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-11-11 00:26:44.835511 | orchestrator | Tuesday 11 November 2025 00:24:36 +0000 (0:00:00.244) 0:01:08.497 ****** 2025-11-11 00:26:44.835523 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.835535 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.835547 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.835559 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.835571 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.835583 | 
orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.835595 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.835607 | orchestrator | 2025-11-11 00:26:44.835619 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-11-11 00:26:44.835631 | orchestrator | Tuesday 11 November 2025 00:24:37 +0000 (0:00:01.105) 0:01:09.602 ****** 2025-11-11 00:26:44.835643 | orchestrator | changed: [testbed-manager] 2025-11-11 00:26:44.835655 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:26:44.835667 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:26:44.835679 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:26:44.835691 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:26:44.835702 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:26:44.835713 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:26:44.835723 | orchestrator | 2025-11-11 00:26:44.835734 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-11-11 00:26:44.835745 | orchestrator | Tuesday 11 November 2025 00:24:39 +0000 (0:00:01.546) 0:01:11.149 ****** 2025-11-11 00:26:44.835755 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.835766 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.835776 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.835787 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.835798 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.835808 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.835819 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.835829 | orchestrator | 2025-11-11 00:26:44.835840 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-11-11 00:26:44.835851 | orchestrator | Tuesday 11 November 2025 00:24:41 +0000 (0:00:02.136) 0:01:13.285 ****** 2025-11-11 00:26:44.835862 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.835872 
| orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.835883 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.835893 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.835903 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.835914 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.835925 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.835935 | orchestrator | 2025-11-11 00:26:44.835946 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-11-11 00:26:44.835957 | orchestrator | Tuesday 11 November 2025 00:25:10 +0000 (0:00:29.203) 0:01:42.489 ****** 2025-11-11 00:26:44.835968 | orchestrator | changed: [testbed-manager] 2025-11-11 00:26:44.835978 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:26:44.835989 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:26:44.836000 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:26:44.836044 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:26:44.836062 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:26:44.836080 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:26:44.836099 | orchestrator | 2025-11-11 00:26:44.836111 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-11-11 00:26:44.836121 | orchestrator | Tuesday 11 November 2025 00:26:29 +0000 (0:01:19.121) 0:03:01.611 ****** 2025-11-11 00:26:44.836132 | orchestrator | ok: [testbed-manager] 2025-11-11 00:26:44.836143 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.836154 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.836164 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.836175 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.836194 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.836205 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.836216 | orchestrator | 2025-11-11 00:26:44.836227 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2025-11-11 00:26:44.836238 | orchestrator | Tuesday 11 November 2025 00:26:31 +0000 (0:00:01.770) 0:03:03.382 ****** 2025-11-11 00:26:44.836249 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:26:44.836259 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:26:44.836270 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:26:44.836281 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:26:44.836291 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:26:44.836302 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:26:44.836313 | orchestrator | changed: [testbed-manager] 2025-11-11 00:26:44.836323 | orchestrator | 2025-11-11 00:26:44.836334 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-11-11 00:26:44.836346 | orchestrator | Tuesday 11 November 2025 00:26:43 +0000 (0:00:11.986) 0:03:15.368 ****** 2025-11-11 00:26:44.836396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-11-11 00:26:44.836420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-11-11 00:26:44.836435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-11-11 00:26:44.836448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-11 00:26:44.836460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-11 00:26:44.836471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-11-11 00:26:44.836482 | orchestrator | 2025-11-11 00:26:44.836493 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-11-11 00:26:44.836504 | orchestrator | Tuesday 11 November 2025 00:26:44 +0000 (0:00:00.466) 0:03:15.834 ****** 2025-11-11 00:26:44.836515 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-11 00:26:44.836525 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-11 00:26:44.836543 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:26:44.836554 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-11 00:26:44.836565 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:26:44.836575 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:26:44.836593 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-11 00:26:44.836604 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:26:44.836615 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-11 00:26:44.836629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-11 00:26:44.836640 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-11 00:26:44.836651 | orchestrator | 2025-11-11 00:26:44.836662 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-11-11 00:26:44.836672 | orchestrator | Tuesday 11 November 2025 00:26:44 +0000 (0:00:00.593) 0:03:16.428 ****** 2025-11-11 00:26:44.836683 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-11 00:26:44.836695 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-11 00:26:44.836706 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-11 00:26:44.836717 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-11 00:26:44.836727 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-11 00:26:44.836745 | orchestrator 
| skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-11 00:26:50.199716 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-11 00:26:50.199837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-11 00:26:50.199852 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-11 00:26:50.199865 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-11 00:26:50.199893 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-11 00:26:50.199905 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-11 00:26:50.199916 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-11 00:26:50.199926 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-11 00:26:50.199937 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-11 00:26:50.199948 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:26:50.199960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-11 00:26:50.199971 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-11 00:26:50.199982 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-11 00:26:50.199993 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-11 00:26:50.200061 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2025-11-11 00:26:50.200075 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-11 00:26:50.200086 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-11 00:26:50.200121 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-11 00:26:50.200132 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-11 00:26:50.200143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-11 00:26:50.200154 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-11 00:26:50.200166 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:26:50.200177 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-11 00:26:50.200188 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-11 00:26:50.200199 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-11 00:26:50.200209 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-11 00:26:50.200220 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:26:50.200231 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-11 00:26:50.200243 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-11 00:26:50.200255 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-11 00:26:50.200267 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 
'value': 16777216})  2025-11-11 00:26:50.200279 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-11 00:26:50.200291 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-11 00:26:50.200303 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-11 00:26:50.200315 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-11 00:26:50.200328 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-11 00:26:50.200339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-11 00:26:50.200351 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:26:50.200363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-11 00:26:50.200375 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-11 00:26:50.200387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-11 00:26:50.200399 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-11 00:26:50.200411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-11 00:26:50.200440 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-11 00:26:50.200454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-11 00:26:50.200466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200478 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200508 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-11 00:26:50.200521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-11 00:26:50.200553 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-11 00:26:50.200565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-11 00:26:50.200579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-11 00:26:50.200591 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-11 00:26:50.200613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-11 00:26:50.200624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-11 00:26:50.200635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-11 00:26:50.200646 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-11 00:26:50.200657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-11 00:26:50.200668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 
'value': 20}) 2025-11-11 00:26:50.200679 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-11 00:26:50.200690 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-11 00:26:50.200701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-11 00:26:50.200713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-11 00:26:50.200723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-11 00:26:50.200734 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-11 00:26:50.200746 | orchestrator | 2025-11-11 00:26:50.200757 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-11-11 00:26:50.200768 | orchestrator | Tuesday 11 November 2025 00:26:49 +0000 (0:00:04.435) 0:03:20.863 ****** 2025-11-11 00:26:50.200779 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200790 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200812 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200823 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200834 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200845 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-11 00:26:50.200855 | orchestrator | 2025-11-11 00:26:50.200866 | 
orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-11-11 00:26:50.200877 | orchestrator | Tuesday 11 November 2025 00:26:49 +0000 (0:00:00.551) 0:03:21.415 ****** 2025-11-11 00:26:50.200888 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-11 00:26:50.200899 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-11 00:26:50.200910 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:26:50.200921 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-11 00:26:50.200932 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:26:50.200943 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:26:50.200961 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-11 00:26:50.200972 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:26:50.200982 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-11 00:26:50.200994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-11 00:26:50.201034 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-11 00:27:02.507123 | orchestrator | 2025-11-11 00:27:02.507263 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-11-11 00:27:02.507281 | orchestrator | Tuesday 11 November 2025 00:26:50 +0000 (0:00:00.544) 0:03:21.959 ****** 2025-11-11 00:27:02.507292 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-11 00:27:02.507325 | orchestrator | skipping: [testbed-node-3] 
2025-11-11 00:27:02.507338 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507350 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507361 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:27:02.507371 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:27:02.507382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507393 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:27:02.507404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507415 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507426 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-11-11 00:27:02.507437 | orchestrator |
2025-11-11 00:27:02.507449 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-11-11 00:27:02.507460 | orchestrator | Tuesday 11 November 2025  00:26:50 +0000 (0:00:00.457)       0:03:22.416 ******
2025-11-11 00:27:02.507471 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507481 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:27:02.507492 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507503 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:27:02.507514 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507525 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:27:02.507535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507546 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:27:02.507557 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507568 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507579 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-11-11 00:27:02.507589 | orchestrator |
2025-11-11 00:27:02.507602 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-11-11 00:27:02.507614 | orchestrator | Tuesday 11 November 2025  00:26:51 +0000 (0:00:00.616)       0:03:23.033 ******
2025-11-11 00:27:02.507626 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:27:02.507639 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:27:02.507651 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:27:02.507664 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:27:02.507700 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:27:02.507713 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:27:02.507725 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:27:02.507738 | orchestrator |
2025-11-11 00:27:02.507751 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-11-11 00:27:02.507763 | orchestrator | Tuesday 11 November 2025  00:26:51 +0000 (0:00:00.312)       0:03:23.346 ******
2025-11-11 00:27:02.507775 | orchestrator | ok: [testbed-manager]
2025-11-11 00:27:02.507789 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:27:02.507801 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:27:02.507813 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:27:02.507826 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:27:02.507838 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:27:02.507850 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:27:02.507862 | orchestrator |
2025-11-11 00:27:02.507874 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-11-11 00:27:02.507886 | orchestrator | Tuesday 11 November 2025  00:26:57 +0000 (0:00:05.741)       0:03:29.087 ******
2025-11-11 00:27:02.507898 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-11-11 00:27:02.507911 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-11-11 00:27:02.507923 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:27:02.507936 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-11-11 00:27:02.507948 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:27:02.507959 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-11-11 00:27:02.507970 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:27:02.507981 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-11-11 00:27:02.507991 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:27:02.508023 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-11-11 00:27:02.508036 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:27:02.508046 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:27:02.508057 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-11-11 00:27:02.508068 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:27:02.508079 | orchestrator |
2025-11-11 00:27:02.508090 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-11-11 00:27:02.508101 | orchestrator | Tuesday 11 November 2025  00:26:57 +0000 (0:00:00.275)       0:03:29.362 ******
2025-11-11 00:27:02.508112 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-11-11 00:27:02.508123 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-11-11 00:27:02.508134 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-11-11 00:27:02.508163 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-11-11 00:27:02.508175 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-11-11 00:27:02.508186 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-11-11 00:27:02.508197 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-11-11 00:27:02.508208 | orchestrator |
2025-11-11 00:27:02.508219 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-11-11 00:27:02.508230 | orchestrator | Tuesday 11 November 2025  00:26:58 +0000 (0:00:00.939)       0:03:30.302 ******
2025-11-11 00:27:02.508243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:27:02.508257 | orchestrator |
2025-11-11 00:27:02.508268 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-11-11 00:27:02.508279 | orchestrator | Tuesday 11 November 2025  00:26:58 +0000 (0:00:00.373)       0:03:30.676 ******
2025-11-11 00:27:02.508291 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:27:02.508302 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:27:02.508313 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:27:02.508323 | orchestrator | ok: [testbed-manager]
2025-11-11 00:27:02.508334 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:27:02.508345 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:27:02.508365 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:27:02.508376 | orchestrator |
2025-11-11 00:27:02.508387 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-11-11 00:27:02.508398 | orchestrator | Tuesday 11 November 2025  00:27:00 +0000 (0:00:01.173)       0:03:31.849 ******
2025-11-11 00:27:02.508408 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:27:02.508419 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:27:02.508430 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:27:02.508449 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:27:02.508461 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:27:02.508472 | orchestrator | ok: [testbed-manager]
2025-11-11 00:27:02.508483 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:27:02.508494 | orchestrator |
2025-11-11 00:27:02.508505 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-11-11 00:27:02.508516 | orchestrator | Tuesday 11 November 2025  00:27:00 +0000 (0:00:00.524)       0:03:32.374 ******
2025-11-11 00:27:02.508527 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:27:02.508538 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:27:02.508548 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:27:02.508559 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:27:02.508569 | orchestrator | changed: [testbed-manager]
2025-11-11 00:27:02.508580 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:27:02.508590 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:27:02.508601 | orchestrator |
2025-11-11 00:27:02.508612 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-11-11 00:27:02.508623 | orchestrator | Tuesday 11 November 2025  00:27:01 +0000 (0:00:00.540)       0:03:32.914 ******
2025-11-11 00:27:02.508633 | orchestrator | ok: [testbed-manager]
2025-11-11 00:27:02.508644 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:27:02.508655 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:27:02.508665 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:27:02.508676 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:27:02.508686 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:27:02.508697 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:27:02.508708 | orchestrator |
2025-11-11 00:27:02.508718 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-11-11 00:27:02.508729 | orchestrator | Tuesday 11 November 2025  00:27:01 +0000 (0:00:00.481)       0:03:33.396 ******
2025-11-11 00:27:02.508744 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819466.1564732, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:02.508759 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819451.8436413, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:02.508771 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819461.2030869, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:02.508816 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819461.4420989, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664321 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819465.889996, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664441 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819430.6495216, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664457 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762819455.665304, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664469 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664481 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664493 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664532 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664589 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664603 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664614 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-11 00:27:06.664626 | orchestrator |
2025-11-11 00:27:06.664639 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-11-11 00:27:06.664652 | orchestrator | Tuesday 11 November 2025  00:27:02 +0000 (0:00:00.866)       0:03:34.262 ******
2025-11-11 00:27:06.664663 | orchestrator | changed: [testbed-manager]
2025-11-11 00:27:06.664675 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:27:06.664686 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:27:06.664696 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:27:06.664707 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:27:06.664718 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:27:06.664728 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:27:06.664739 | orchestrator |
2025-11-11 00:27:06.664750 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-11-11 00:27:06.664761 | orchestrator | Tuesday 11 November 2025  00:27:03 +0000 (0:00:00.954)       0:03:35.217 ******
2025-11-11 00:27:06.664771 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:27:06.664782 | orchestrator | changed: [testbed-manager]
2025-11-11 00:27:06.664793 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:27:06.664803 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:27:06.664814 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:27:06.664825 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:27:06.664836 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:27:06.664848 | orchestrator |
2025-11-11 00:27:06.664860 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-11-11 00:27:06.664872 | orchestrator | Tuesday 11 November 2025  00:27:04 +0000 (0:00:01.026)       0:03:36.205 ******
2025-11-11 00:27:06.664893 | orchestrator | changed: [testbed-manager]
2025-11-11 00:27:06.664905 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:27:06.664918 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:27:06.664930 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:27:06.664942 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:27:06.664954 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:27:06.664966 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:27:06.664978 | orchestrator |
2025-11-11 00:27:06.664990 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-11-11 00:27:06.665046 | orchestrator | Tuesday 11 November 2025  00:27:05 +0000 (0:00:01.026)       0:03:37.231 ******
2025-11-11 00:27:06.665061 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:27:06.665073 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:27:06.665086 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:27:06.665098 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:27:06.665110 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:27:06.665122 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:27:06.665134 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:27:06.665147 | orchestrator |
2025-11-11 00:27:06.665159 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-11-11 00:27:06.665172 | orchestrator | Tuesday 11 November 2025  00:27:05 +0000 (0:00:00.208)       0:03:37.440 ******
2025-11-11 00:27:06.665185 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:27:06.665198 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:27:06.665209 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:27:06.665220 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:27:06.665231 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:27:06.665241 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:27:06.665252 | orchestrator | ok: [testbed-manager]
2025-11-11 00:27:06.665263 | orchestrator |
2025-11-11 00:27:06.665274 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-11-11 00:27:06.665285 | orchestrator | Tuesday 11 November 2025  00:27:06 +0000 (0:00:00.610)       0:03:38.050 ******
2025-11-11 00:27:06.665304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:27:06.665317 | orchestrator |
2025-11-11 00:27:06.665328 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-11-11 00:27:06.665347 | orchestrator | Tuesday 11 November 2025  00:27:06 +0000 (0:00:00.372)       0:03:38.423 ******
2025-11-11 00:28:22.261519 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.261644 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:22.261659 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:22.261670 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:22.261680 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:22.261690 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:22.261700 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:22.261711 | orchestrator |
2025-11-11 00:28:22.261722 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-11-11 00:28:22.261733 | orchestrator | Tuesday 11 November 2025  00:27:14 +0000 (0:00:07.989)       0:03:46.412 ******
2025-11-11 00:28:22.261743 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.261753 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.261763 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.261773 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.261783 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.261793 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.261802 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.261812 | orchestrator |
2025-11-11 00:28:22.261822 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-11-11 00:28:22.261832 | orchestrator | Tuesday 11 November 2025  00:27:15 +0000 (0:00:01.231)       0:03:47.644 ******
2025-11-11 00:28:22.261843 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.261876 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.261886 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.261896 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.261906 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.261915 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.261925 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.261935 | orchestrator |
2025-11-11 00:28:22.261945 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-11-11 00:28:22.261955 | orchestrator | Tuesday 11 November 2025  00:27:16 +0000 (0:00:00.979)       0:03:48.623 ******
2025-11-11 00:28:22.261964 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.261974 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.262010 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.262078 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.262090 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.262100 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.262111 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.262130 | orchestrator |
2025-11-11 00:28:22.262142 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-11-11 00:28:22.262153 | orchestrator | Tuesday 11 November 2025  00:27:17 +0000 (0:00:00.298)       0:03:48.922 ******
2025-11-11 00:28:22.262163 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.262174 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.262184 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.262195 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.262205 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.262216 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.262227 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.262237 | orchestrator |
2025-11-11 00:28:22.262248 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-11-11 00:28:22.262259 | orchestrator | Tuesday 11 November 2025  00:27:17 +0000 (0:00:00.275)       0:03:49.197 ******
2025-11-11 00:28:22.262269 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.262280 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.262290 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.262301 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.262312 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.262322 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.262333 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.262343 | orchestrator |
2025-11-11 00:28:22.262354 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-11-11 00:28:22.262365 | orchestrator | Tuesday 11 November 2025  00:27:17 +0000 (0:00:00.281)       0:03:49.479 ******
2025-11-11 00:28:22.262376 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.262387 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.262397 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.262408 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.262417 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.262427 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.262436 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.262446 | orchestrator |
2025-11-11 00:28:22.262455 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-11-11 00:28:22.262465 | orchestrator | Tuesday 11 November 2025  00:27:23 +0000 (0:00:05.531)       0:03:55.011 ******
2025-11-11 00:28:22.262476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:28:22.262489 | orchestrator |
2025-11-11 00:28:22.262499 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-11-11 00:28:22.262509 | orchestrator | Tuesday 11 November 2025  00:27:23 +0000 (0:00:00.419)       0:03:55.430 ******
2025-11-11 00:28:22.262519 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262529 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-11-11 00:28:22.262548 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262558 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:28:22.262568 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-11-11 00:28:22.262578 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262587 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-11-11 00:28:22.262597 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:28:22.262606 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262616 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:28:22.262626 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-11-11 00:28:22.262635 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:28:22.262645 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262655 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-11-11 00:28:22.262665 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262675 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-11-11 00:28:22.262701 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:28:22.262711 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:28:22.262721 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-11-11 00:28:22.262731 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-11-11 00:28:22.262741 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:28:22.262750 | orchestrator |
2025-11-11 00:28:22.262760 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-11-11 00:28:22.262770 | orchestrator | Tuesday 11 November 2025  00:27:23 +0000 (0:00:00.301)       0:03:55.731 ******
2025-11-11 00:28:22.262780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:28:22.262790 | orchestrator |
2025-11-11 00:28:22.262800 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-11-11 00:28:22.262810 | orchestrator | Tuesday 11 November 2025  00:27:24 +0000 (0:00:00.387)       0:03:56.119 ******
2025-11-11 00:28:22.262836 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-11-11 00:28:22.262846 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:28:22.262856 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-11-11 00:28:22.262866 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-11-11 00:28:22.262876 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:28:22.262886 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-11-11 00:28:22.262895 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:28:22.262905 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-11-11 00:28:22.262915 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:28:22.262924 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-11-11 00:28:22.262934 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:28:22.262943 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:28:22.262953 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-11-11 00:28:22.262962 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:28:22.262972 | orchestrator |
2025-11-11 00:28:22.263037 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-11-11 00:28:22.263048 | orchestrator | Tuesday 11 November 2025  00:27:24 +0000 (0:00:00.320)       0:03:56.439 ******
2025-11-11 00:28:22.263058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:28:22.263068 | orchestrator |
2025-11-11 00:28:22.263078 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-11-11 00:28:22.263095 | orchestrator | Tuesday 11 November 2025  00:27:25 +0000 (0:00:00.368)       0:03:56.807 ******
2025-11-11 00:28:22.263105 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:22.263115 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:22.263125 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:22.263134 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:22.263144 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:22.263153 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:22.263163 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:22.263172 | orchestrator |
2025-11-11 00:28:22.263182 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-11-11 00:28:22.263192 | orchestrator | Tuesday 11 November 2025  00:27:59 +0000 (0:00:34.028)       0:04:30.836 ******
2025-11-11 00:28:22.263201 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:22.263211 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:22.263220 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:22.263230 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:22.263240 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:22.263249 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:22.263259 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:22.263268 | orchestrator |
2025-11-11 00:28:22.263278 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-11-11 00:28:22.263288 | orchestrator | Tuesday 11 November 2025  00:28:06 +0000 (0:00:07.894)       0:04:38.730 ******
2025-11-11 00:28:22.263297 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:22.263307 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:22.263316 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:22.263326 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:22.263336 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:22.263345 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:22.263355 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:22.263364 | orchestrator |
2025-11-11 00:28:22.263374 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-11-11 00:28:22.263383 | orchestrator | Tuesday 11 November 2025  00:28:14 +0000 (0:00:07.738)       0:04:46.469 ******
2025-11-11 00:28:22.263393 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:22.263403 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:22.263412 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:22.263422 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:22.263431 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:22.263441 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:22.263451 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:22.263460 | orchestrator |
2025-11-11 00:28:22.263470 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-11-11 00:28:22.263480 | orchestrator | Tuesday 11 November 2025  00:28:16 +0000 (0:00:01.699)       0:04:48.168 ******
2025-11-11 00:28:22.263489 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:22.263499 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:22.263514 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:22.263524 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:22.263533 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:22.263543 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:22.263553 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:22.263563 | orchestrator |
2025-11-11 00:28:22.263579 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-11-11 00:28:32.593846 | orchestrator | Tuesday 11 November 2025  00:28:22 +0000 (0:00:05.844)       0:04:54.013 ******
2025-11-11 00:28:32.593955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:28:32.593963 | orchestrator |
2025-11-11 00:28:32.593968 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-11-11 00:28:32.594063 | orchestrator | Tuesday 11 November 2025  00:28:22 +0000 (0:00:00.394)       0:04:54.408 ******
2025-11-11 00:28:32.594069 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:32.594074 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:32.594079 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:32.594083 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:32.594086 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:32.594090 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:32.594094 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:32.594098 | orchestrator |
2025-11-11 00:28:32.594102 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-11-11 00:28:32.594106 | orchestrator | Tuesday 11 November 2025  00:28:23 +0000 (0:00:00.710)       0:04:55.118 ******
2025-11-11 00:28:32.594110 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:28:32.594115 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:28:32.594119 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:28:32.594123 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:28:32.594126 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:28:32.594130 | orchestrator | ok: [testbed-manager]
2025-11-11 00:28:32.594133 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:28:32.594137 | orchestrator |
2025-11-11 00:28:32.594141 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-11-11 00:28:32.594145 | orchestrator | Tuesday 11 November 2025  00:28:24 +0000 (0:00:01.534)       0:04:56.653 ******
2025-11-11 00:28:32.594148 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:28:32.594152 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:28:32.594156 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:28:32.594159 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:28:32.594163 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:28:32.594172 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:28:32.594177 | orchestrator | changed: [testbed-manager]
2025-11-11 00:28:32.594180 | orchestrator |
2025-11-11 00:28:32.594184 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-11-11 00:28:32.594188 | orchestrator | Tuesday 11 November 2025  00:28:25 +0000 (0:00:00.753)       0:04:57.406 ******
2025-11-11 00:28:32.594192 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:28:32.594195 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:28:32.594199 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:28:32.594203 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:28:32.594206 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:28:32.594210 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:28:32.594214 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:28:32.594217 | orchestrator |
2025-11-11 00:28:32.594221 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-11-11 00:28:32.594225 | orchestrator | Tuesday 11 November 2025  00:28:25 +0000 (0:00:00.254)       0:04:57.661 ******
2025-11-11 00:28:32.594228 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:28:32.594232 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:28:32.594236 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:28:32.594239 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:28:32.594243 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:28:32.594246 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:28:32.594250 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:28:32.594254 | orchestrator |
2025-11-11 00:28:32.594257 |
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-11-11 00:28:32.594261 | orchestrator | Tuesday 11 November 2025 00:28:26 +0000 (0:00:00.360) 0:04:58.021 ****** 2025-11-11 00:28:32.594265 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:28:32.594268 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:28:32.594272 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:28:32.594276 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:28:32.594279 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:28:32.594283 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:28:32.594287 | orchestrator | ok: [testbed-manager] 2025-11-11 00:28:32.594295 | orchestrator | 2025-11-11 00:28:32.594299 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-11-11 00:28:32.594303 | orchestrator | Tuesday 11 November 2025 00:28:26 +0000 (0:00:00.289) 0:04:58.310 ****** 2025-11-11 00:28:32.594306 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:28:32.594310 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:28:32.594314 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:28:32.594318 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:28:32.594321 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:28:32.594325 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:28:32.594329 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:28:32.594332 | orchestrator | 2025-11-11 00:28:32.594336 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-11-11 00:28:32.594341 | orchestrator | Tuesday 11 November 2025 00:28:26 +0000 (0:00:00.264) 0:04:58.575 ****** 2025-11-11 00:28:32.594345 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:28:32.594349 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:28:32.594352 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:28:32.594356 | orchestrator | ok: 
[testbed-node-3] 2025-11-11 00:28:32.594360 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:28:32.594363 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:28:32.594367 | orchestrator | ok: [testbed-manager] 2025-11-11 00:28:32.594371 | orchestrator | 2025-11-11 00:28:32.594374 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-11-11 00:28:32.594378 | orchestrator | Tuesday 11 November 2025 00:28:27 +0000 (0:00:00.300) 0:04:58.875 ****** 2025-11-11 00:28:32.594382 | orchestrator | ok: [testbed-node-0] =>  2025-11-11 00:28:32.594399 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594403 | orchestrator | ok: [testbed-node-1] =>  2025-11-11 00:28:32.594407 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594411 | orchestrator | ok: [testbed-node-2] =>  2025-11-11 00:28:32.594415 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594419 | orchestrator | ok: [testbed-node-3] =>  2025-11-11 00:28:32.594423 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594439 | orchestrator | ok: [testbed-node-4] =>  2025-11-11 00:28:32.594443 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594447 | orchestrator | ok: [testbed-node-5] =>  2025-11-11 00:28:32.594452 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594456 | orchestrator | ok: [testbed-manager] =>  2025-11-11 00:28:32.594460 | orchestrator |  docker_version: 5:27.5.1 2025-11-11 00:28:32.594464 | orchestrator | 2025-11-11 00:28:32.594468 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-11-11 00:28:32.594472 | orchestrator | Tuesday 11 November 2025 00:28:27 +0000 (0:00:00.257) 0:04:59.133 ****** 2025-11-11 00:28:32.594476 | orchestrator | ok: [testbed-node-0] =>  2025-11-11 00:28:32.594480 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594484 | orchestrator | ok: [testbed-node-1] =>  2025-11-11 00:28:32.594489 | 
orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594493 | orchestrator | ok: [testbed-node-2] =>  2025-11-11 00:28:32.594497 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594501 | orchestrator | ok: [testbed-node-3] =>  2025-11-11 00:28:32.594504 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594510 | orchestrator | ok: [testbed-node-4] =>  2025-11-11 00:28:32.594517 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594522 | orchestrator | ok: [testbed-node-5] =>  2025-11-11 00:28:32.594529 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594535 | orchestrator | ok: [testbed-manager] =>  2025-11-11 00:28:32.594541 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-11 00:28:32.594546 | orchestrator | 2025-11-11 00:28:32.594552 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-11-11 00:28:32.594558 | orchestrator | Tuesday 11 November 2025 00:28:27 +0000 (0:00:00.272) 0:04:59.405 ****** 2025-11-11 00:28:32.594565 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:28:32.594573 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:28:32.594578 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:28:32.594582 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:28:32.594586 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:28:32.594590 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:28:32.594594 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:28:32.594598 | orchestrator | 2025-11-11 00:28:32.594602 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-11-11 00:28:32.594607 | orchestrator | Tuesday 11 November 2025 00:28:28 +0000 (0:00:00.365) 0:04:59.770 ****** 2025-11-11 00:28:32.594611 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:28:32.594615 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:28:32.594620 
| orchestrator | skipping: [testbed-node-2] 2025-11-11 00:28:32.594624 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:28:32.594629 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:28:32.594633 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:28:32.594637 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:28:32.594641 | orchestrator | 2025-11-11 00:28:32.594645 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-11-11 00:28:32.594649 | orchestrator | Tuesday 11 November 2025 00:28:28 +0000 (0:00:00.245) 0:05:00.016 ****** 2025-11-11 00:28:32.594654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-11 00:28:32.594660 | orchestrator | 2025-11-11 00:28:32.594664 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-11-11 00:28:32.594669 | orchestrator | Tuesday 11 November 2025 00:28:28 +0000 (0:00:00.410) 0:05:00.426 ****** 2025-11-11 00:28:32.594673 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:28:32.594677 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:28:32.594681 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:28:32.594685 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:28:32.594689 | orchestrator | ok: [testbed-manager] 2025-11-11 00:28:32.594693 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:28:32.594697 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:28:32.594701 | orchestrator | 2025-11-11 00:28:32.594705 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-11-11 00:28:32.594710 | orchestrator | Tuesday 11 November 2025 00:28:29 +0000 (0:00:00.796) 0:05:01.222 ****** 2025-11-11 00:28:32.594714 | orchestrator | ok: [testbed-node-0] 
2025-11-11 00:28:32.594718 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:28:32.594722 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:28:32.594726 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:28:32.594730 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:28:32.594734 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:28:32.594738 | orchestrator | ok: [testbed-manager] 2025-11-11 00:28:32.594743 | orchestrator | 2025-11-11 00:28:32.594747 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-11-11 00:28:32.594753 | orchestrator | Tuesday 11 November 2025 00:28:32 +0000 (0:00:02.773) 0:05:03.996 ****** 2025-11-11 00:28:32.594757 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-11-11 00:28:32.594761 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-11-11 00:28:32.594764 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-11-11 00:28:32.594768 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-11-11 00:28:32.594772 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-11-11 00:28:32.594775 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-11-11 00:28:32.594779 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:28:32.594783 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-11-11 00:28:32.594786 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-11-11 00:28:32.594794 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-11-11 00:28:32.594797 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:28:32.594804 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-11-11 00:28:32.594808 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-11-11 00:28:32.594812 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-11-11 00:28:32.594815 | 
orchestrator | skipping: [testbed-node-2] 2025-11-11 00:28:32.594819 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-11-11 00:28:32.594826 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-11-11 00:29:31.949643 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-11-11 00:29:31.949785 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:29:31.949804 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-11-11 00:29:31.949817 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-11-11 00:29:31.949829 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-11-11 00:29:31.949841 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:29:31.949854 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:29:31.949865 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-11-11 00:29:31.949877 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-11-11 00:29:31.949888 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-11-11 00:29:31.949900 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:29:31.949911 | orchestrator | 2025-11-11 00:29:31.949923 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-11-11 00:29:31.949998 | orchestrator | Tuesday 11 November 2025 00:28:33 +0000 (0:00:00.854) 0:05:04.851 ****** 2025-11-11 00:29:31.950072 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950089 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950100 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950111 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950123 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950135 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950146 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950158 | orchestrator | 2025-11-11 
00:29:31.950171 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-11-11 00:29:31.950184 | orchestrator | Tuesday 11 November 2025 00:28:39 +0000 (0:00:06.408) 0:05:11.259 ****** 2025-11-11 00:29:31.950197 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950209 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950222 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950234 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950246 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950256 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950267 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950278 | orchestrator | 2025-11-11 00:29:31.950290 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-11-11 00:29:31.950302 | orchestrator | Tuesday 11 November 2025 00:28:40 +0000 (0:00:01.069) 0:05:12.329 ****** 2025-11-11 00:29:31.950313 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950325 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950337 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950348 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950359 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950370 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950382 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950392 | orchestrator | 2025-11-11 00:29:31.950403 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-11-11 00:29:31.950414 | orchestrator | Tuesday 11 November 2025 00:28:48 +0000 (0:00:07.727) 0:05:20.057 ****** 2025-11-11 00:29:31.950425 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950435 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950446 | orchestrator | changed: [testbed-node-2] 2025-11-11 
00:29:31.950488 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950499 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950510 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950521 | orchestrator | changed: [testbed-manager] 2025-11-11 00:29:31.950531 | orchestrator | 2025-11-11 00:29:31.950541 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-11-11 00:29:31.950552 | orchestrator | Tuesday 11 November 2025 00:28:51 +0000 (0:00:03.343) 0:05:23.401 ****** 2025-11-11 00:29:31.950561 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950571 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950582 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950591 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950601 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950610 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950620 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950630 | orchestrator | 2025-11-11 00:29:31.950640 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-11-11 00:29:31.950651 | orchestrator | Tuesday 11 November 2025 00:28:53 +0000 (0:00:01.559) 0:05:24.961 ****** 2025-11-11 00:29:31.950661 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950670 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950681 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950691 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950701 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950712 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950721 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950731 | orchestrator | 2025-11-11 00:29:31.950740 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-11-11 
00:29:31.950750 | orchestrator | Tuesday 11 November 2025 00:28:54 +0000 (0:00:01.439) 0:05:26.400 ****** 2025-11-11 00:29:31.950760 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:29:31.950771 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:29:31.950781 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:29:31.950791 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:29:31.950801 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:29:31.950811 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:29:31.950821 | orchestrator | changed: [testbed-manager] 2025-11-11 00:29:31.950832 | orchestrator | 2025-11-11 00:29:31.950842 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-11-11 00:29:31.950852 | orchestrator | Tuesday 11 November 2025 00:28:55 +0000 (0:00:01.020) 0:05:27.421 ****** 2025-11-11 00:29:31.950862 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.950871 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950877 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950883 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950890 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950896 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.950902 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.950909 | orchestrator | 2025-11-11 00:29:31.950915 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-11-11 00:29:31.950961 | orchestrator | Tuesday 11 November 2025 00:29:04 +0000 (0:00:08.823) 0:05:36.244 ****** 2025-11-11 00:29:31.950970 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.950976 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.950982 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.950988 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.950995 | orchestrator | changed: 
[testbed-node-4] 2025-11-11 00:29:31.951001 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.951007 | orchestrator | changed: [testbed-manager] 2025-11-11 00:29:31.951013 | orchestrator | 2025-11-11 00:29:31.951019 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-11-11 00:29:31.951026 | orchestrator | Tuesday 11 November 2025 00:29:05 +0000 (0:00:00.895) 0:05:37.140 ****** 2025-11-11 00:29:31.951041 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.951048 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.951054 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.951060 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.951066 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.951072 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.951078 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.951084 | orchestrator | 2025-11-11 00:29:31.951090 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-11-11 00:29:31.951097 | orchestrator | Tuesday 11 November 2025 00:29:14 +0000 (0:00:09.151) 0:05:46.292 ****** 2025-11-11 00:29:31.951103 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.951109 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.951115 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.951121 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.951127 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.951134 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.951140 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.951146 | orchestrator | 2025-11-11 00:29:31.951152 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-11-11 00:29:31.951158 | orchestrator | Tuesday 11 November 2025 00:29:25 +0000 (0:00:11.006) 0:05:57.299 ****** 2025-11-11 
00:29:31.951164 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-11-11 00:29:31.951171 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-11-11 00:29:31.951177 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-11-11 00:29:31.951183 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-11-11 00:29:31.951189 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-11-11 00:29:31.951195 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-11-11 00:29:31.951201 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-11-11 00:29:31.951208 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-11-11 00:29:31.951214 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-11-11 00:29:31.951220 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-11-11 00:29:31.951226 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-11-11 00:29:31.951232 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-11-11 00:29:31.951239 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-11-11 00:29:31.951245 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-11-11 00:29:31.951251 | orchestrator | 2025-11-11 00:29:31.951257 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-11-11 00:29:31.951264 | orchestrator | Tuesday 11 November 2025 00:29:26 +0000 (0:00:01.179) 0:05:58.479 ****** 2025-11-11 00:29:31.951270 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:29:31.951276 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:29:31.951282 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:29:31.951288 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:29:31.951294 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:29:31.951300 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:29:31.951306 | orchestrator | 
skipping: [testbed-manager] 2025-11-11 00:29:31.951313 | orchestrator | 2025-11-11 00:29:31.951319 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-11-11 00:29:31.951325 | orchestrator | Tuesday 11 November 2025 00:29:27 +0000 (0:00:00.517) 0:05:58.996 ****** 2025-11-11 00:29:31.951331 | orchestrator | ok: [testbed-manager] 2025-11-11 00:29:31.951337 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:29:31.951344 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:29:31.951350 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:29:31.951356 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:29:31.951362 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:29:31.951368 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:29:31.951374 | orchestrator | 2025-11-11 00:29:31.951385 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-11-11 00:29:31.951440 | orchestrator | Tuesday 11 November 2025 00:29:30 +0000 (0:00:03.674) 0:06:02.671 ****** 2025-11-11 00:29:31.951447 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:29:31.951454 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:29:31.951460 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:29:31.951466 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:29:31.951472 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:29:31.951478 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:29:31.951484 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:29:31.951490 | orchestrator | 2025-11-11 00:29:31.951498 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-11-11 00:29:31.951504 | orchestrator | Tuesday 11 November 2025 00:29:31 +0000 (0:00:00.693) 0:06:03.364 ****** 2025-11-11 00:29:31.951511 | orchestrator | skipping: [testbed-node-0] => 
(item=python3-docker)  2025-11-11 00:29:31.951517 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-11-11 00:29:31.951523 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:29:31.951529 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-11-11 00:29:31.951538 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-11-11 00:29:31.951545 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:29:31.951551 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-11-11 00:29:31.951557 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-11-11 00:29:31.951563 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:29:31.951575 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-11-11 00:29:50.831010 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-11-11 00:29:50.831141 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:29:50.831156 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-11-11 00:29:50.831167 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-11-11 00:29:50.831177 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:29:50.831187 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-11-11 00:29:50.831196 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-11-11 00:29:50.831206 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:29:50.831216 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-11-11 00:29:50.831226 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-11-11 00:29:50.831236 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:29:50.831245 | orchestrator | 2025-11-11 00:29:50.831256 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-11-11 00:29:50.831267 | 
orchestrator | Tuesday 11 November 2025 00:29:32 +0000 (0:00:00.619) 0:06:03.984 ******
2025-11-11 00:29:50.831277 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:29:50.831287 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:29:50.831296 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:29:50.831306 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:29:50.831315 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:29:50.831325 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:29:50.831334 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:29:50.831344 | orchestrator |
2025-11-11 00:29:50.831354 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-11-11 00:29:50.831363 | orchestrator | Tuesday 11 November 2025 00:29:32 +0000 (0:00:00.567) 0:06:04.552 ******
2025-11-11 00:29:50.831373 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:29:50.831383 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:29:50.831392 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:29:50.831402 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:29:50.831411 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:29:50.831448 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:29:50.831458 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:29:50.831467 | orchestrator |
2025-11-11 00:29:50.831477 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-11-11 00:29:50.831486 | orchestrator | Tuesday 11 November 2025 00:29:33 +0000 (0:00:00.536) 0:06:05.088 ******
2025-11-11 00:29:50.831496 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:29:50.831506 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:29:50.831517 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:29:50.831528 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:29:50.831538 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:29:50.831549 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:29:50.831560 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:29:50.831570 | orchestrator |
2025-11-11 00:29:50.831581 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-11-11 00:29:50.831592 | orchestrator | Tuesday 11 November 2025 00:29:34 +0000 (0:00:00.798) 0:06:05.887 ******
2025-11-11 00:29:50.831603 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.831614 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.831624 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.831635 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.831645 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.831656 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.831667 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.831677 | orchestrator |
2025-11-11 00:29:50.831688 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-11-11 00:29:50.831699 | orchestrator | Tuesday 11 November 2025 00:29:35 +0000 (0:00:01.676) 0:06:07.564 ******
2025-11-11 00:29:50.831710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:29:50.831723 | orchestrator |
2025-11-11 00:29:50.831734 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-11-11 00:29:50.831745 | orchestrator | Tuesday 11 November 2025 00:29:36 +0000 (0:00:00.809) 0:06:08.373 ******
2025-11-11 00:29:50.831756 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:29:50.831766 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:29:50.831777 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:29:50.831787 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:29:50.831798 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:29:50.831809 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:29:50.831820 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.831830 | orchestrator |
2025-11-11 00:29:50.831841 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-11-11 00:29:50.831852 | orchestrator | Tuesday 11 November 2025 00:29:37 +0000 (0:00:00.811) 0:06:09.184 ******
2025-11-11 00:29:50.831862 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:29:50.831872 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:29:50.831881 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:29:50.831891 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:29:50.831900 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:29:50.831909 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:29:50.831919 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.831955 | orchestrator |
2025-11-11 00:29:50.831966 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-11-11 00:29:50.831975 | orchestrator | Tuesday 11 November 2025 00:29:38 +0000 (0:00:01.141) 0:06:10.326 ******
2025-11-11 00:29:50.831985 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:29:50.831994 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:29:50.832005 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:29:50.832014 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:29:50.832024 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:29:50.832033 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:29:50.832050 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832060 | orchestrator |
2025-11-11 00:29:50.832070 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-11-11 00:29:50.832096 | orchestrator | Tuesday 11 November 2025 00:29:39 +0000 (0:00:01.253) 0:06:11.579 ******
2025-11-11 00:29:50.832106 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:29:50.832116 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.832126 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.832136 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.832145 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.832155 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.832165 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.832174 | orchestrator |
2025-11-11 00:29:50.832184 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-11-11 00:29:50.832194 | orchestrator | Tuesday 11 November 2025 00:29:41 +0000 (0:00:01.242) 0:06:12.822 ******
2025-11-11 00:29:50.832204 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:29:50.832213 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:29:50.832223 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:29:50.832233 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:29:50.832242 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:29:50.832252 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832261 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:29:50.832271 | orchestrator |
2025-11-11 00:29:50.832281 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-11-11 00:29:50.832291 | orchestrator | Tuesday 11 November 2025 00:29:42 +0000 (0:00:01.238) 0:06:14.061 ******
2025-11-11 00:29:50.832300 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:29:50.832310 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:29:50.832319 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:29:50.832329 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:29:50.832339 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:29:50.832348 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:29:50.832358 | orchestrator | changed: [testbed-manager]
2025-11-11 00:29:50.832368 | orchestrator |
2025-11-11 00:29:50.832378 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-11-11 00:29:50.832387 | orchestrator | Tuesday 11 November 2025 00:29:43 +0000 (0:00:01.342) 0:06:15.404 ******
2025-11-11 00:29:50.832397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:29:50.832407 | orchestrator |
2025-11-11 00:29:50.832417 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-11-11 00:29:50.832427 | orchestrator | Tuesday 11 November 2025 00:29:44 +0000 (0:00:01.136) 0:06:16.540 ******
2025-11-11 00:29:50.832436 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.832446 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.832456 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.832466 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.832475 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.832485 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.832494 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832504 | orchestrator |
2025-11-11 00:29:50.832514 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-11-11 00:29:50.832524 | orchestrator | Tuesday 11 November 2025 00:29:46 +0000 (0:00:01.390) 0:06:17.930 ******
2025-11-11 00:29:50.832534 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.832543 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.832553 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.832563 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.832572 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.832582 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.832591 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832601 | orchestrator |
2025-11-11 00:29:50.832611 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-11-11 00:29:50.832635 | orchestrator | Tuesday 11 November 2025 00:29:47 +0000 (0:00:01.143) 0:06:19.074 ******
2025-11-11 00:29:50.832645 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.832655 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.832665 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.832674 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.832684 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.832693 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.832703 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832713 | orchestrator |
2025-11-11 00:29:50.832723 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-11-11 00:29:50.832733 | orchestrator | Tuesday 11 November 2025 00:29:48 +0000 (0:00:01.294) 0:06:20.368 ******
2025-11-11 00:29:50.832742 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:29:50.832752 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:29:50.832762 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:29:50.832771 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:29:50.832781 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:29:50.832790 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:29:50.832800 | orchestrator | ok: [testbed-manager]
2025-11-11 00:29:50.832810 | orchestrator |
2025-11-11 00:29:50.832820 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-11-11 00:29:50.832830 | orchestrator | Tuesday 11 November 2025 00:29:49 +0000 (0:00:01.083) 0:06:21.452 ******
2025-11-11 00:29:50.832839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:29:50.832849 | orchestrator |
2025-11-11 00:29:50.832859 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:29:50.832869 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.843) 0:06:22.295 ******
2025-11-11 00:29:50.832879 | orchestrator |
2025-11-11 00:29:50.832888 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:29:50.832898 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.039) 0:06:22.335 ******
2025-11-11 00:29:50.832908 | orchestrator |
2025-11-11 00:29:50.832951 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:29:50.832962 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.044) 0:06:22.380 ******
2025-11-11 00:29:50.832972 | orchestrator |
2025-11-11 00:29:50.832981 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:29:50.832997 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.037) 0:06:22.418 ******
2025-11-11 00:30:14.371298 | orchestrator |
2025-11-11 00:30:14.371406 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:30:14.371421 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.036) 0:06:22.455 ******
2025-11-11 00:30:14.371431 | orchestrator |
2025-11-11 00:30:14.371441 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:30:14.371451 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.042) 0:06:22.498 ******
2025-11-11 00:30:14.371461 | orchestrator |
2025-11-11 00:30:14.371470 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-11-11 00:30:14.371480 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.044) 0:06:22.543 ******
2025-11-11 00:30:14.371489 | orchestrator |
2025-11-11 00:30:14.371498 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-11-11 00:30:14.371508 | orchestrator | Tuesday 11 November 2025 00:29:50 +0000 (0:00:00.038) 0:06:22.581 ******
2025-11-11 00:30:14.371517 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:14.371528 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:14.371537 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:14.371547 | orchestrator |
2025-11-11 00:30:14.371556 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-11-11 00:30:14.371589 | orchestrator | Tuesday 11 November 2025 00:29:51 +0000 (0:00:01.073) 0:06:23.654 ******
2025-11-11 00:30:14.371599 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:14.371609 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:14.371618 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:14.371628 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:14.371637 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:14.371647 | orchestrator | changed: [testbed-manager]
2025-11-11 00:30:14.371656 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:14.371666 | orchestrator |
2025-11-11 00:30:14.371675 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2025-11-11 00:30:14.371685 | orchestrator | Tuesday 11 November 2025 00:29:53 +0000 (0:00:01.604) 0:06:25.259 ******
2025-11-11 00:30:14.371694 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:14.371703 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:14.371713 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:14.371722 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:14.371731 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:14.371741 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:14.371750 | orchestrator | changed: [testbed-manager]
2025-11-11 00:30:14.371759 | orchestrator |
2025-11-11 00:30:14.371769 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-11-11 00:30:14.371778 | orchestrator | Tuesday 11 November 2025 00:29:54 +0000 (0:00:01.212) 0:06:26.471 ******
2025-11-11 00:30:14.371788 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:14.371798 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:14.371807 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:14.371816 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:14.371826 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:14.371835 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:14.371845 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:14.371855 | orchestrator |
2025-11-11 00:30:14.371866 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-11-11 00:30:14.371877 | orchestrator | Tuesday 11 November 2025 00:29:56 +0000 (0:00:02.196) 0:06:28.668 ******
2025-11-11 00:30:14.371887 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:14.371898 | orchestrator |
2025-11-11 00:30:14.371908 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-11-11 00:30:14.371939 | orchestrator | Tuesday 11 November 2025 00:29:56 +0000 (0:00:00.087) 0:06:28.755 ******
2025-11-11 00:30:14.371950 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:14.371961 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:14.371972 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:14.371982 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:14.371993 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:14.372003 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:14.372012 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.372022 | orchestrator |
2025-11-11 00:30:14.372031 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-11-11 00:30:14.372041 | orchestrator | Tuesday 11 November 2025 00:29:57 +0000 (0:00:00.940) 0:06:29.695 ******
2025-11-11 00:30:14.372051 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:14.372060 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:14.372069 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:14.372079 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:14.372088 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:14.372097 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:14.372107 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:14.372116 | orchestrator |
2025-11-11 00:30:14.372126 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-11-11 00:30:14.372135 | orchestrator | Tuesday 11 November 2025 00:29:58 +0000 (0:00:00.668) 0:06:30.364 ******
2025-11-11 00:30:14.372146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:30:14.372164 | orchestrator |
2025-11-11 00:30:14.372174 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-11-11 00:30:14.372184 | orchestrator | Tuesday 11 November 2025 00:29:59 +0000 (0:00:00.843) 0:06:31.208 ******
2025-11-11 00:30:14.372194 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:14.372203 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:14.372213 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:14.372222 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:14.372246 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:14.372256 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:14.372265 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.372275 | orchestrator |
2025-11-11 00:30:14.372285 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-11-11 00:30:14.372294 | orchestrator | Tuesday 11 November 2025 00:30:00 +0000 (0:00:00.803) 0:06:32.011 ******
2025-11-11 00:30:14.372304 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-11-11 00:30:14.372329 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-11-11 00:30:14.372339 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-11-11 00:30:14.372348 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-11-11 00:30:14.372358 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-11-11 00:30:14.372367 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-11-11 00:30:14.372377 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-11-11 00:30:14.372386 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-11-11 00:30:14.372396 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-11-11 00:30:14.372405 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-11-11 00:30:14.372415 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-11-11 00:30:14.372425 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-11-11 00:30:14.372434 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-11-11 00:30:14.372444 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-11-11 00:30:14.372453 | orchestrator |
2025-11-11 00:30:14.372463 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-11-11 00:30:14.372472 | orchestrator | Tuesday 11 November 2025 00:30:02 +0000 (0:00:02.498) 0:06:34.509 ******
2025-11-11 00:30:14.372482 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:14.372491 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:14.372500 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:14.372510 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:14.372519 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:14.372529 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:14.372538 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:14.372547 | orchestrator |
2025-11-11 00:30:14.372557 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-11-11 00:30:14.372567 | orchestrator | Tuesday 11 November 2025 00:30:03 +0000 (0:00:00.484) 0:06:34.994 ******
2025-11-11 00:30:14.372577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:30:14.372588 | orchestrator |
2025-11-11 00:30:14.372598 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-11-11 00:30:14.372607 | orchestrator | Tuesday 11 November 2025 00:30:04 +0000 (0:00:00.783) 0:06:35.777 ******
2025-11-11 00:30:14.372617 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:14.372626 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:14.372636 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:14.372651 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:14.372661 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:14.372670 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:14.372680 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.372689 | orchestrator |
2025-11-11 00:30:14.372699 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-11-11 00:30:14.372708 | orchestrator | Tuesday 11 November 2025 00:30:04 +0000 (0:00:00.947) 0:06:36.725 ******
2025-11-11 00:30:14.372718 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:14.372727 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:14.372736 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:14.372757 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:14.372767 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:14.372776 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:14.372786 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.372795 | orchestrator |
2025-11-11 00:30:14.372805 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-11-11 00:30:14.372815 | orchestrator | Tuesday 11 November 2025 00:30:05 +0000 (0:00:00.784) 0:06:37.509 ******
2025-11-11 00:30:14.372824 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:14.372834 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:14.372844 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:14.372853 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:14.372863 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:14.372872 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:14.372882 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:14.372891 | orchestrator |
2025-11-11 00:30:14.372901 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-11-11 00:30:14.372923 | orchestrator | Tuesday 11 November 2025 00:30:06 +0000 (0:00:00.492) 0:06:38.002 ******
2025-11-11 00:30:14.372933 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:14.372942 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:14.372952 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:14.372962 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:14.372971 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:14.372981 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:14.372990 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.373000 | orchestrator |
2025-11-11 00:30:14.373010 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-11-11 00:30:14.373019 | orchestrator | Tuesday 11 November 2025 00:30:07 +0000 (0:00:01.360) 0:06:39.362 ******
2025-11-11 00:30:14.373029 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:14.373039 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:14.373048 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:14.373058 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:14.373067 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:14.373077 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:14.373086 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:14.373096 | orchestrator |
2025-11-11 00:30:14.373106 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-11-11 00:30:14.373116 | orchestrator | Tuesday 11 November 2025 00:30:08 +0000 (0:00:00.470) 0:06:39.833 ******
2025-11-11 00:30:14.373126 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:14.373135 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:14.373145 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:14.373155 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:14.373164 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:14.373174 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:14.373189 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:46.238856 | orchestrator |
2025-11-11 00:30:46.239045 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-11-11 00:30:46.239064 | orchestrator | Tuesday 11 November 2025 00:30:14 +0000 (0:00:06.287) 0:06:46.121 ******
2025-11-11 00:30:46.239076 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:46.239118 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:46.239130 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:46.239141 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:46.239152 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:46.239163 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:46.239174 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.239186 | orchestrator |
2025-11-11 00:30:46.239197 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-11-11 00:30:46.239208 | orchestrator | Tuesday 11 November 2025 00:30:15 +0000 (0:00:01.277) 0:06:47.398 ******
2025-11-11 00:30:46.239219 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:46.239230 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:46.239241 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:46.239251 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:46.239262 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.239273 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:46.239284 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:46.239295 | orchestrator |
2025-11-11 00:30:46.239305 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-11-11 00:30:46.239316 | orchestrator | Tuesday 11 November 2025 00:30:17 +0000 (0:00:01.474) 0:06:48.873 ******
2025-11-11 00:30:46.239327 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:46.239338 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:46.239349 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:46.239359 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:46.239370 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:30:46.239382 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:46.239394 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.239407 | orchestrator |
2025-11-11 00:30:46.239419 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-11-11 00:30:46.239432 | orchestrator | Tuesday 11 November 2025 00:30:18 +0000 (0:00:01.447) 0:06:50.321 ******
2025-11-11 00:30:46.239444 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.239456 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.239468 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.239480 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.239492 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.239504 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.239516 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.239528 | orchestrator |
2025-11-11 00:30:46.239540 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-11-11 00:30:46.239553 | orchestrator | Tuesday 11 November 2025 00:30:19 +0000 (0:00:01.059) 0:06:51.381 ******
2025-11-11 00:30:46.239565 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:46.239577 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:46.239589 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:46.239602 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:46.239614 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:46.239626 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:46.239638 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:46.239650 | orchestrator |
2025-11-11 00:30:46.239662 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-11-11 00:30:46.239674 | orchestrator | Tuesday 11 November 2025 00:30:20 +0000 (0:00:00.774) 0:06:52.156 ******
2025-11-11 00:30:46.239687 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:46.239699 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:46.239711 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:46.239723 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:46.239735 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:46.239745 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:46.239756 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:46.239767 | orchestrator |
2025-11-11 00:30:46.239778 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-11-11 00:30:46.239790 | orchestrator | Tuesday 11 November 2025 00:30:20 +0000 (0:00:00.512) 0:06:52.668 ******
2025-11-11 00:30:46.239809 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.239820 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.239831 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.239842 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.239853 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.239863 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.239874 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.239884 | orchestrator |
2025-11-11 00:30:46.239926 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-11-11 00:30:46.239938 | orchestrator | Tuesday 11 November 2025 00:30:21 +0000 (0:00:00.513) 0:06:53.181 ******
2025-11-11 00:30:46.239949 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.239960 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.239970 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.239981 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.239991 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240002 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240012 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240023 | orchestrator |
2025-11-11 00:30:46.240034 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-11-11 00:30:46.240045 | orchestrator | Tuesday 11 November 2025 00:30:22 +0000 (0:00:00.707) 0:06:53.888 ******
2025-11-11 00:30:46.240055 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.240066 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.240097 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.240109 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.240119 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240130 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240141 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240152 | orchestrator |
2025-11-11 00:30:46.240163 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-11-11 00:30:46.240178 | orchestrator | Tuesday 11 November 2025 00:30:22 +0000 (0:00:00.542) 0:06:54.431 ******
2025-11-11 00:30:46.240190 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240200 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.240211 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.240222 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.240232 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.240243 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240254 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240264 | orchestrator |
2025-11-11 00:30:46.240294 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-11-11 00:30:46.240306 | orchestrator | Tuesday 11 November 2025 00:30:28 +0000 (0:00:05.523) 0:06:59.954 ******
2025-11-11 00:30:46.240317 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:30:46.240328 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:30:46.240339 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:30:46.240350 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:30:46.240361 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:30:46.240371 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:30:46.240382 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:30:46.240393 | orchestrator |
2025-11-11 00:30:46.240404 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-11-11 00:30:46.240415 | orchestrator | Tuesday 11 November 2025 00:30:28 +0000 (0:00:00.520) 0:07:00.474 ******
2025-11-11 00:30:46.240427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:30:46.240440 | orchestrator |
2025-11-11 00:30:46.240452 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-11-11 00:30:46.240462 | orchestrator | Tuesday 11 November 2025 00:30:29 +0000 (0:00:00.983) 0:07:01.457 ******
2025-11-11 00:30:46.240473 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.240492 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.240503 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.240513 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.240524 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240535 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240546 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240556 | orchestrator |
2025-11-11 00:30:46.240567 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-11-11 00:30:46.240578 | orchestrator | Tuesday 11 November 2025 00:30:31 +0000 (0:00:01.733) 0:07:03.191 ******
2025-11-11 00:30:46.240588 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.240599 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.240610 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.240620 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.240631 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240641 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240652 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240663 | orchestrator |
2025-11-11 00:30:46.240673 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-11-11 00:30:46.240684 | orchestrator | Tuesday 11 November 2025 00:30:32 +0000 (0:00:01.108) 0:07:04.299 ******
2025-11-11 00:30:46.240695 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:30:46.240705 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:30:46.240716 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:30:46.240727 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:30:46.240737 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:30:46.240748 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:30:46.240759 | orchestrator | ok: [testbed-manager]
2025-11-11 00:30:46.240770 | orchestrator |
2025-11-11 00:30:46.240781 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-11-11 00:30:46.240791 | orchestrator | Tuesday 11 November 2025 00:30:33 +0000 (0:00:00.820) 0:07:05.120 ******
2025-11-11 00:30:46.240803 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240816 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240827 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240838 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240849 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240860 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240870 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-11 00:30:46.240881 | orchestrator |
2025-11-11 00:30:46.240919 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-11-11 00:30:46.240932 | orchestrator | Tuesday 11 November 2025 00:30:35 +0000 (0:00:01.885) 0:07:07.005 ******
2025-11-11 00:30:46.240943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-11 00:30:46.240954 | orchestrator |
2025-11-11 00:30:46.240965 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-11-11 00:30:46.240976 | orchestrator | Tuesday 11 November 2025 00:30:35 +0000 (0:00:00.747) 0:07:07.752 ******
2025-11-11 00:30:46.240992 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:30:46.241003 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:30:46.241025 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:30:46.241036 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:30:46.241047 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:30:46.241058 | orchestrator | changed: [testbed-manager]
2025-11-11 00:30:46.241068 | orchestrator | changed:
[testbed-node-4] 2025-11-11 00:30:46.241079 | orchestrator | 2025-11-11 00:30:46.241096 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-11-11 00:31:16.685478 | orchestrator | Tuesday 11 November 2025 00:30:46 +0000 (0:00:10.237) 0:07:17.990 ****** 2025-11-11 00:31:16.685635 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:16.685661 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:16.685677 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:16.685693 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:16.685708 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:16.685725 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:16.685741 | orchestrator | ok: [testbed-manager] 2025-11-11 00:31:16.685758 | orchestrator | 2025-11-11 00:31:16.685776 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-11-11 00:31:16.685792 | orchestrator | Tuesday 11 November 2025 00:30:48 +0000 (0:00:01.906) 0:07:19.897 ****** 2025-11-11 00:31:16.685808 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:16.685824 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:16.685841 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:16.685857 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:16.685906 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:16.685924 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:16.685941 | orchestrator | 2025-11-11 00:31:16.685959 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-11-11 00:31:16.685977 | orchestrator | Tuesday 11 November 2025 00:30:49 +0000 (0:00:01.293) 0:07:21.190 ****** 2025-11-11 00:31:16.685996 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.686015 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.686113 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.686130 | orchestrator | changed: 
[testbed-node-3] 2025-11-11 00:31:16.686146 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.686163 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.686181 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.686199 | orchestrator | 2025-11-11 00:31:16.686217 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-11-11 00:31:16.686233 | orchestrator | 2025-11-11 00:31:16.686252 | orchestrator | TASK [Include hardening role] ************************************************** 2025-11-11 00:31:16.686270 | orchestrator | Tuesday 11 November 2025 00:30:50 +0000 (0:00:01.377) 0:07:22.568 ****** 2025-11-11 00:31:16.686287 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:31:16.686304 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:31:16.686321 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:31:16.686337 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:31:16.686353 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:31:16.686369 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:31:16.686385 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:31:16.686401 | orchestrator | 2025-11-11 00:31:16.686418 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-11-11 00:31:16.686434 | orchestrator | 2025-11-11 00:31:16.686451 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-11-11 00:31:16.686467 | orchestrator | Tuesday 11 November 2025 00:30:51 +0000 (0:00:00.490) 0:07:23.058 ****** 2025-11-11 00:31:16.686483 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.686500 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.686516 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.686533 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.686549 | orchestrator | changed: [testbed-node-4] 2025-11-11 
00:31:16.686565 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.686581 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.686638 | orchestrator | 2025-11-11 00:31:16.686655 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-11-11 00:31:16.686672 | orchestrator | Tuesday 11 November 2025 00:30:52 +0000 (0:00:01.225) 0:07:24.284 ****** 2025-11-11 00:31:16.686687 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:16.686703 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:16.686718 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:16.686734 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:16.686750 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:16.686764 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:16.686781 | orchestrator | ok: [testbed-manager] 2025-11-11 00:31:16.686797 | orchestrator | 2025-11-11 00:31:16.686813 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-11-11 00:31:16.686828 | orchestrator | Tuesday 11 November 2025 00:30:53 +0000 (0:00:01.410) 0:07:25.694 ****** 2025-11-11 00:31:16.686845 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:31:16.686861 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:31:16.686901 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:31:16.686918 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:31:16.686934 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:31:16.686950 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:31:16.686966 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:31:16.686981 | orchestrator | 2025-11-11 00:31:16.686998 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-11-11 00:31:16.687014 | orchestrator | Tuesday 11 November 2025 00:30:54 +0000 (0:00:00.656) 0:07:26.351 ****** 2025-11-11 00:31:16.687031 | orchestrator | included: 
osism.services.smartd for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-11 00:31:16.687050 | orchestrator | 2025-11-11 00:31:16.687066 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-11-11 00:31:16.687082 | orchestrator | Tuesday 11 November 2025 00:30:55 +0000 (0:00:00.791) 0:07:27.143 ****** 2025-11-11 00:31:16.687100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-11 00:31:16.687120 | orchestrator | 2025-11-11 00:31:16.687136 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-11-11 00:31:16.687173 | orchestrator | Tuesday 11 November 2025 00:30:56 +0000 (0:00:00.743) 0:07:27.886 ****** 2025-11-11 00:31:16.687190 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.687206 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.687222 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.687238 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.687253 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.687269 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.687285 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.687301 | orchestrator | 2025-11-11 00:31:16.687346 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-11-11 00:31:16.687363 | orchestrator | Tuesday 11 November 2025 00:31:05 +0000 (0:00:08.984) 0:07:36.871 ****** 2025-11-11 00:31:16.687379 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.687396 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.687411 | orchestrator | changed: [testbed-node-2] 2025-11-11 
00:31:16.687427 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.687443 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.687458 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.687474 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.687490 | orchestrator | 2025-11-11 00:31:16.687506 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-11-11 00:31:16.687521 | orchestrator | Tuesday 11 November 2025 00:31:05 +0000 (0:00:00.845) 0:07:37.717 ****** 2025-11-11 00:31:16.687537 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.687568 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.687584 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.687599 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.687615 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.687631 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.687646 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.687662 | orchestrator | 2025-11-11 00:31:16.687677 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-11-11 00:31:16.687693 | orchestrator | Tuesday 11 November 2025 00:31:07 +0000 (0:00:01.247) 0:07:38.964 ****** 2025-11-11 00:31:16.687709 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.687726 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.687741 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.687757 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.687773 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.687788 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.687804 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.687820 | orchestrator | 2025-11-11 00:31:16.687837 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2025-11-11 00:31:16.687853 | orchestrator | Tuesday 11 November 2025 00:31:09 +0000 (0:00:01.843) 0:07:40.808 ****** 2025-11-11 00:31:16.687868 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.687906 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.687922 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.687938 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.687954 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.687971 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.687988 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.688003 | orchestrator | 2025-11-11 00:31:16.688064 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-11-11 00:31:16.688082 | orchestrator | Tuesday 11 November 2025 00:31:10 +0000 (0:00:01.191) 0:07:41.999 ****** 2025-11-11 00:31:16.688098 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.688114 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.688130 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.688146 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.688163 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.688179 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.688195 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.688211 | orchestrator | 2025-11-11 00:31:16.688227 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-11-11 00:31:16.688243 | orchestrator | 2025-11-11 00:31:16.688260 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-11-11 00:31:16.688276 | orchestrator | Tuesday 11 November 2025 00:31:11 +0000 (0:00:01.140) 0:07:43.139 ****** 2025-11-11 00:31:16.688293 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager 2025-11-11 00:31:16.688310 | orchestrator | 2025-11-11 00:31:16.688327 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-11 00:31:16.688343 | orchestrator | Tuesday 11 November 2025 00:31:12 +0000 (0:00:01.139) 0:07:44.279 ****** 2025-11-11 00:31:16.688359 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:16.688375 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:16.688392 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:16.688408 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:16.688424 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:16.688440 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:16.688456 | orchestrator | ok: [testbed-manager] 2025-11-11 00:31:16.688473 | orchestrator | 2025-11-11 00:31:16.688490 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-11 00:31:16.688506 | orchestrator | Tuesday 11 November 2025 00:31:13 +0000 (0:00:00.926) 0:07:45.206 ****** 2025-11-11 00:31:16.688522 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:16.688552 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:16.688568 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:16.688585 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:16.688602 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:16.688618 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:16.688634 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:16.688649 | orchestrator | 2025-11-11 00:31:16.688665 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-11-11 00:31:16.688681 | orchestrator | Tuesday 11 November 2025 00:31:14 +0000 (0:00:01.152) 0:07:46.358 ****** 2025-11-11 00:31:16.688699 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager 2025-11-11 00:31:16.688715 | orchestrator | 2025-11-11 00:31:16.688733 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-11 00:31:16.688759 | orchestrator | Tuesday 11 November 2025 00:31:15 +0000 (0:00:01.230) 0:07:47.589 ****** 2025-11-11 00:31:16.688775 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:16.688793 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:16.688811 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:16.688828 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:16.688844 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:16.688860 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:16.688915 | orchestrator | ok: [testbed-manager] 2025-11-11 00:31:16.688932 | orchestrator | 2025-11-11 00:31:16.688964 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-11 00:31:18.393406 | orchestrator | Tuesday 11 November 2025 00:31:16 +0000 (0:00:00.843) 0:07:48.433 ****** 2025-11-11 00:31:18.393521 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:18.393536 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:18.393547 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:18.393558 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:18.393569 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:18.393579 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:18.393590 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:18.393601 | orchestrator | 2025-11-11 00:31:18.393613 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:31:18.393625 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2025-11-11 00:31:18.393638 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 
2025-11-11 00:31:18.393649 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-11 00:31:18.393660 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-11 00:31:18.393671 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-11 00:31:18.393682 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-11 00:31:18.393692 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-11 00:31:18.393703 | orchestrator | 2025-11-11 00:31:18.393714 | orchestrator | 2025-11-11 00:31:18.393724 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:31:18.393735 | orchestrator | Tuesday 11 November 2025 00:31:17 +0000 (0:00:01.113) 0:07:49.546 ****** 2025-11-11 00:31:18.393746 | orchestrator | =============================================================================== 2025-11-11 00:31:18.393789 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.12s 2025-11-11 00:31:18.393801 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.03s 2025-11-11 00:31:18.393811 | orchestrator | osism.commons.packages : Download required packages -------------------- 29.20s 2025-11-11 00:31:18.393822 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.62s 2025-11-11 00:31:18.393832 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.99s 2025-11-11 00:31:18.393844 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.77s 2025-11-11 00:31:18.393854 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.01s 2025-11-11 00:31:18.393865 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.24s 2025-11-11 00:31:18.393898 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.15s 2025-11-11 00:31:18.393910 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.98s 2025-11-11 00:31:18.393920 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.82s 2025-11-11 00:31:18.393931 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.99s 2025-11-11 00:31:18.393942 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.89s 2025-11-11 00:31:18.393953 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.74s 2025-11-11 00:31:18.393963 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.73s 2025-11-11 00:31:18.393974 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.41s 2025-11-11 00:31:18.393985 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.29s 2025-11-11 00:31:18.393995 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.84s 2025-11-11 00:31:18.394006 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.74s 2025-11-11 00:31:18.394073 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.53s 2025-11-11 00:31:18.710071 | orchestrator | + osism apply fail2ban 2025-11-11 00:31:31.338727 | orchestrator | 2025-11-11 00:31:31 | INFO  | Task 18e436a4-179a-4fc1-aa9e-b8f5f1cdc49e (fail2ban) was prepared for execution. 
2025-11-11 00:31:31.338852 | orchestrator | 2025-11-11 00:31:31 | INFO  | It takes a moment until task 18e436a4-179a-4fc1-aa9e-b8f5f1cdc49e (fail2ban) has been started and output is visible here. 2025-11-11 00:31:53.214389 | orchestrator | 2025-11-11 00:31:53.214550 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2025-11-11 00:31:53.214570 | orchestrator | 2025-11-11 00:31:53.214582 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2025-11-11 00:31:53.214594 | orchestrator | Tuesday 11 November 2025 00:31:35 +0000 (0:00:00.251) 0:00:00.251 ****** 2025-11-11 00:31:53.214606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:31:53.214619 | orchestrator | 2025-11-11 00:31:53.214630 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2025-11-11 00:31:53.214641 | orchestrator | Tuesday 11 November 2025 00:31:36 +0000 (0:00:01.123) 0:00:01.375 ****** 2025-11-11 00:31:53.214652 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:53.214664 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:53.214675 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:53.214686 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:53.214696 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:53.214707 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:53.214717 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:53.214728 | orchestrator | 2025-11-11 00:31:53.214739 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2025-11-11 00:31:53.214776 | orchestrator | Tuesday 11 November 2025 00:31:48 +0000 (0:00:11.534) 0:00:12.910 ****** 
2025-11-11 00:31:53.214787 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:53.214797 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:53.214808 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:53.214818 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:53.214829 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:53.214839 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:31:53.214900 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:53.214913 | orchestrator | 2025-11-11 00:31:53.214925 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2025-11-11 00:31:53.214938 | orchestrator | Tuesday 11 November 2025 00:31:49 +0000 (0:00:01.543) 0:00:14.454 ****** 2025-11-11 00:31:53.214950 | orchestrator | ok: [testbed-manager] 2025-11-11 00:31:53.214964 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:31:53.214976 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:31:53.214988 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:31:53.214999 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:31:53.215011 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:31:53.215023 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:31:53.215035 | orchestrator | 2025-11-11 00:31:53.215047 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2025-11-11 00:31:53.215059 | orchestrator | Tuesday 11 November 2025 00:31:51 +0000 (0:00:01.484) 0:00:15.938 ****** 2025-11-11 00:31:53.215072 | orchestrator | changed: [testbed-manager] 2025-11-11 00:31:53.215084 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:31:53.215095 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:31:53.215107 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:31:53.215119 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:31:53.215131 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:31:53.215143 | orchestrator | changed: 
[testbed-node-5] 2025-11-11 00:31:53.215155 | orchestrator | 2025-11-11 00:31:53.215166 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:31:53.215179 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215192 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215204 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215217 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215229 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215241 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215254 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:31:53.215266 | orchestrator | 2025-11-11 00:31:53.215278 | orchestrator | 2025-11-11 00:31:53.215289 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:31:53.215300 | orchestrator | Tuesday 11 November 2025 00:31:52 +0000 (0:00:01.555) 0:00:17.493 ****** 2025-11-11 00:31:53.215310 | orchestrator | =============================================================================== 2025-11-11 00:31:53.215321 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.53s 2025-11-11 00:31:53.215332 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.56s 2025-11-11 00:31:53.215342 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.54s 2025-11-11 00:31:53.215362 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.48s 2025-11-11 00:31:53.215372 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.12s 2025-11-11 00:31:53.516708 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-11-11 00:31:53.516801 | orchestrator | + osism apply network 2025-11-11 00:32:05.498935 | orchestrator | 2025-11-11 00:32:05 | INFO  | Task 4643454a-db1d-46e9-ad74-049f5607171f (network) was prepared for execution. 2025-11-11 00:32:05.499041 | orchestrator | 2025-11-11 00:32:05 | INFO  | It takes a moment until task 4643454a-db1d-46e9-ad74-049f5607171f (network) has been started and output is visible here. 2025-11-11 00:32:31.689695 | orchestrator | 2025-11-11 00:32:31.689824 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-11-11 00:32:31.689893 | orchestrator | 2025-11-11 00:32:31.689905 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-11-11 00:32:31.689917 | orchestrator | Tuesday 11 November 2025 00:32:09 +0000 (0:00:00.186) 0:00:00.186 ****** 2025-11-11 00:32:31.689929 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.689941 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.689952 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.689963 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.689974 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.689984 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.689995 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.690006 | orchestrator | 2025-11-11 00:32:31.690082 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-11-11 00:32:31.690097 | orchestrator | Tuesday 11 November 2025 00:32:09 +0000 (0:00:00.534) 0:00:00.721 ****** 2025-11-11 00:32:31.690110 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:32:31.690125 | orchestrator | 2025-11-11 00:32:31.690137 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-11-11 00:32:31.690149 | orchestrator | Tuesday 11 November 2025 00:32:10 +0000 (0:00:00.865) 0:00:01.586 ****** 2025-11-11 00:32:31.690160 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.690171 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.690182 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.690193 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.690203 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.690215 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.690227 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.690239 | orchestrator | 2025-11-11 00:32:31.690251 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-11-11 00:32:31.690264 | orchestrator | Tuesday 11 November 2025 00:32:12 +0000 (0:00:01.996) 0:00:03.583 ****** 2025-11-11 00:32:31.690277 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.690289 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.690300 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.690313 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.690325 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.690337 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.690349 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.690361 | orchestrator | 2025-11-11 00:32:31.690373 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-11-11 00:32:31.690386 | orchestrator | Tuesday 11 November 2025 00:32:14 +0000 (0:00:01.591) 0:00:05.174 ****** 
2025-11-11 00:32:31.690398 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-11-11 00:32:31.690411 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-11-11 00:32:31.690424 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-11-11 00:32:31.690436 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-11-11 00:32:31.690449 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-11-11 00:32:31.690486 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-11-11 00:32:31.690499 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-11-11 00:32:31.690511 | orchestrator | 2025-11-11 00:32:31.690524 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-11-11 00:32:31.690536 | orchestrator | Tuesday 11 November 2025 00:32:14 +0000 (0:00:00.904) 0:00:06.079 ****** 2025-11-11 00:32:31.690548 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-11 00:32:31.690562 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-11 00:32:31.690573 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-11 00:32:31.690584 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-11 00:32:31.690595 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-11 00:32:31.690606 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-11 00:32:31.690634 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-11 00:32:31.690646 | orchestrator | 2025-11-11 00:32:31.690657 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-11-11 00:32:31.690668 | orchestrator | Tuesday 11 November 2025 00:32:18 +0000 (0:00:03.253) 0:00:09.333 ****** 2025-11-11 00:32:31.690679 | orchestrator | changed: [testbed-manager] 2025-11-11 00:32:31.690690 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:32:31.690701 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:32:31.690712 | orchestrator | changed: 
[testbed-node-2] 2025-11-11 00:32:31.690722 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:32:31.690733 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:32:31.690744 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:32:31.690755 | orchestrator | 2025-11-11 00:32:31.690766 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-11-11 00:32:31.690777 | orchestrator | Tuesday 11 November 2025 00:32:19 +0000 (0:00:01.350) 0:00:10.684 ****** 2025-11-11 00:32:31.690788 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-11 00:32:31.690799 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-11 00:32:31.690810 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-11 00:32:31.690821 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-11 00:32:31.690849 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-11 00:32:31.690860 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-11 00:32:31.690871 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-11 00:32:31.690882 | orchestrator | 2025-11-11 00:32:31.690893 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-11-11 00:32:31.690903 | orchestrator | Tuesday 11 November 2025 00:32:21 +0000 (0:00:01.538) 0:00:12.222 ****** 2025-11-11 00:32:31.690914 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.690925 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.690936 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.690947 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.690958 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.690974 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.690985 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.690996 | orchestrator | 2025-11-11 00:32:31.691007 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-11-11 00:32:31.691037 | 
orchestrator | Tuesday 11 November 2025 00:32:22 +0000 (0:00:00.988) 0:00:13.211 ****** 2025-11-11 00:32:31.691048 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:32:31.691059 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:31.691070 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:32:31.691081 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:31.691092 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:31.691102 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:31.691113 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:31.691124 | orchestrator | 2025-11-11 00:32:31.691135 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-11-11 00:32:31.691146 | orchestrator | Tuesday 11 November 2025 00:32:22 +0000 (0:00:00.650) 0:00:13.862 ****** 2025-11-11 00:32:31.691166 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.691177 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.691188 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.691199 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.691210 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.691221 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.691231 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.691242 | orchestrator | 2025-11-11 00:32:31.691253 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-11-11 00:32:31.691264 | orchestrator | Tuesday 11 November 2025 00:32:24 +0000 (0:00:02.092) 0:00:15.954 ****** 2025-11-11 00:32:31.691275 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:31.691286 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:32:31.691297 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:31.691307 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:31.691318 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:31.691329 | 
orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:31.691341 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-11-11 00:32:31.691353 | orchestrator | 2025-11-11 00:32:31.691364 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-11-11 00:32:31.691376 | orchestrator | Tuesday 11 November 2025 00:32:25 +0000 (0:00:00.958) 0:00:16.913 ****** 2025-11-11 00:32:31.691386 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.691397 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:32:31.691408 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:32:31.691418 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:32:31.691429 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:32:31.691440 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:32:31.691450 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:32:31.691461 | orchestrator | 2025-11-11 00:32:31.691472 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-11-11 00:32:31.691483 | orchestrator | Tuesday 11 November 2025 00:32:27 +0000 (0:00:01.677) 0:00:18.590 ****** 2025-11-11 00:32:31.691495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:32:31.691508 | orchestrator | 2025-11-11 00:32:31.691519 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-11 00:32:31.691529 | orchestrator | Tuesday 11 November 2025 00:32:28 +0000 (0:00:01.220) 0:00:19.811 ****** 2025-11-11 00:32:31.691540 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.691551 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.691562 | orchestrator 
| ok: [testbed-node-1] 2025-11-11 00:32:31.691573 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.691583 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.691594 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.691605 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.691616 | orchestrator | 2025-11-11 00:32:31.691626 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-11-11 00:32:31.691638 | orchestrator | Tuesday 11 November 2025 00:32:29 +0000 (0:00:01.088) 0:00:20.900 ****** 2025-11-11 00:32:31.691648 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:31.691659 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:31.691670 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:31.691680 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:31.691691 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:31.691702 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:31.691712 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:31.691723 | orchestrator | 2025-11-11 00:32:31.691733 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-11 00:32:31.691744 | orchestrator | Tuesday 11 November 2025 00:32:30 +0000 (0:00:00.679) 0:00:21.579 ****** 2025-11-11 00:32:31.691755 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691777 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691788 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691799 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691810 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691820 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691847 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691858 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691869 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691880 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691890 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691901 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-11-11 00:32:31.691917 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691928 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-11 00:32:31.691939 | orchestrator | 2025-11-11 00:32:31.691958 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-11-11 00:32:46.890706 | orchestrator | Tuesday 11 November 2025 00:32:31 +0000 (0:00:01.225) 0:00:22.805 ****** 2025-11-11 00:32:46.890880 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:32:46.890899 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:46.890911 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:32:46.890923 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:46.890934 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:46.890945 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:46.890956 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:46.890967 | orchestrator | 2025-11-11 00:32:46.890980 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-11-11 00:32:46.890991 | orchestrator | Tuesday 11 November 2025 00:32:32 +0000 (0:00:00.618) 0:00:23.424 ****** 2025-11-11 00:32:46.891004 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2025-11-11 00:32:46.891018 | orchestrator | 2025-11-11 00:32:46.891030 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-11-11 00:32:46.891041 | orchestrator | Tuesday 11 November 2025 00:32:36 +0000 (0:00:04.483) 0:00:27.908 ****** 2025-11-11 00:32:46.891054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891067 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891092 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 
42}}) 2025-11-11 00:32:46.891142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891288 | orchestrator | 2025-11-11 00:32:46.891301 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-11-11 00:32:46.891314 | orchestrator | Tuesday 11 November 2025 00:32:41 +0000 (0:00:04.962) 0:00:32.870 ****** 2025-11-11 00:32:46.891326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891351 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891372 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891409 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 42}}) 2025-11-11 00:32:46.891459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:46.891498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:52.583311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-11 00:32:52.583436 | orchestrator | 2025-11-11 00:32:52.583453 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-11-11 00:32:52.583466 | orchestrator | Tuesday 11 November 2025 00:32:46 +0000 (0:00:05.138) 0:00:38.009 ****** 2025-11-11 00:32:52.583479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:32:52.583491 | orchestrator | 2025-11-11 00:32:52.583502 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
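The item dictionaries logged above (`vni`, `local_ip`, `mtu`, `dests`, `addresses`) are the inputs to the two templating tasks. As an illustration only, a plausible rendering of the netdev unit for `vxlan0` on `testbed-node-0` could look like the following; the file name, section layout, and FDB handling are assumptions, since the log shows the input data but not the rendered files:

```ini
; /etc/systemd/network/30-vxlan0.netdev (hypothetical rendering of the logged item)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10
```

For a unicast VXLAN mesh like this (no multicast group, an explicit `dests` list per node), each remote endpoint is typically programmed as an all-zeroes `[BridgeFDB]` entry (`MACAddress=00:00:00:00:00:00`, `Destination=<remote>`) in the `.network` file of the underlying interface, which would explain why the role writes a matching network file per VXLAN device in the next task.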
2025-11-11 00:32:52.583514 | orchestrator | Tuesday 11 November 2025 00:32:47 +0000 (0:00:01.089) 0:00:39.098 ****** 2025-11-11 00:32:52.583554 | orchestrator | ok: [testbed-manager] 2025-11-11 00:32:52.583567 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:32:52.583578 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:32:52.583588 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:32:52.583599 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:32:52.583610 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:32:52.583620 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:32:52.583631 | orchestrator | 2025-11-11 00:32:52.583642 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-11 00:32:52.583653 | orchestrator | Tuesday 11 November 2025 00:32:49 +0000 (0:00:01.055) 0:00:40.153 ****** 2025-11-11 00:32:52.583664 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583676 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583686 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.583697 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.583708 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:32:52.583720 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583731 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583742 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.583752 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.583763 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:52.583774 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583785 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583795 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.583806 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.583845 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:32:52.583858 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583870 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583882 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.583893 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.583905 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:52.583916 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583928 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583939 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.583950 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.583962 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:52.583974 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.583986 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.583997 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2025-11-11 00:32:52.584009 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.584021 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:52.584033 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-11 00:32:52.584063 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-11 00:32:52.584083 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-11 00:32:52.584095 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-11 00:32:52.584107 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:52.584119 | orchestrator | 2025-11-11 00:32:52.584131 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-11-11 00:32:52.584163 | orchestrator | Tuesday 11 November 2025 00:32:50 +0000 (0:00:01.831) 0:00:41.985 ****** 2025-11-11 00:32:52.584176 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:32:52.584188 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:52.584199 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:32:52.584209 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:52.584220 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:52.584230 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:52.584241 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:52.584251 | orchestrator | 2025-11-11 00:32:52.584262 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-11-11 00:32:52.584273 | orchestrator | Tuesday 11 November 2025 00:32:51 +0000 (0:00:00.640) 0:00:42.626 ****** 2025-11-11 00:32:52.584283 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:32:52.584294 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:32:52.584304 | orchestrator 
| skipping: [testbed-node-1] 2025-11-11 00:32:52.584315 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:32:52.584325 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:32:52.584336 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:32:52.584346 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:32:52.584357 | orchestrator | 2025-11-11 00:32:52.584367 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:32:52.584379 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-11 00:32:52.584392 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584403 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584414 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584424 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584435 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584445 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-11 00:32:52.584456 | orchestrator | 2025-11-11 00:32:52.584467 | orchestrator | 2025-11-11 00:32:52.584477 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:32:52.584488 | orchestrator | Tuesday 11 November 2025 00:32:52 +0000 (0:00:00.688) 0:00:43.314 ****** 2025-11-11 00:32:52.584499 | orchestrator | =============================================================================== 2025-11-11 00:32:52.584509 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.14s 
2025-11-11 00:32:52.584520 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.96s 2025-11-11 00:32:52.584531 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.48s 2025-11-11 00:32:52.584541 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.25s 2025-11-11 00:32:52.584559 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2025-11-11 00:32:52.584569 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.00s 2025-11-11 00:32:52.584580 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.83s 2025-11-11 00:32:52.584591 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s 2025-11-11 00:32:52.584601 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.59s 2025-11-11 00:32:52.584612 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.54s 2025-11-11 00:32:52.584622 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.35s 2025-11-11 00:32:52.584632 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s 2025-11-11 00:32:52.584643 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2025-11-11 00:32:52.584654 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2025-11-11 00:32:52.584664 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.09s 2025-11-11 00:32:52.584675 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.06s 2025-11-11 00:32:52.584685 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.99s 2025-11-11 
00:32:52.584696 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.96s
2025-11-11 00:32:52.584706 | orchestrator | osism.commons.network : Create required directories --------------------- 0.91s
2025-11-11 00:32:52.584723 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.87s
2025-11-11 00:32:52.862156 | orchestrator | + osism apply wireguard
2025-11-11 00:33:04.906059 | orchestrator | 2025-11-11 00:33:04 | INFO  | Task 058cc299-b7eb-45dc-bbc9-725b2dabec58 (wireguard) was prepared for execution.
2025-11-11 00:33:04.906158 | orchestrator | 2025-11-11 00:33:04 | INFO  | It takes a moment until task 058cc299-b7eb-45dc-bbc9-725b2dabec58 (wireguard) has been started and output is visible here.
2025-11-11 00:33:25.778694 | orchestrator |
2025-11-11 00:33:25.778879 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-11-11 00:33:25.778899 | orchestrator |
2025-11-11 00:33:25.778912 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-11-11 00:33:25.778924 | orchestrator | Tuesday 11 November 2025 00:33:08 +0000 (0:00:00.159) 0:00:00.159 ******
2025-11-11 00:33:25.778935 | orchestrator | ok: [testbed-manager]
2025-11-11 00:33:25.778947 | orchestrator |
2025-11-11 00:33:25.778958 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-11-11 00:33:25.778969 | orchestrator | Tuesday 11 November 2025 00:33:10 +0000 (0:00:01.298) 0:00:01.458 ******
2025-11-11 00:33:25.778980 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.778991 | orchestrator |
2025-11-11 00:33:25.779002 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-11-11 00:33:25.779013 | orchestrator | Tuesday 11 November 2025 00:33:17 +0000 (0:00:07.045) 0:00:08.504 ******
2025-11-11 00:33:25.779023 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779034 | orchestrator |
2025-11-11 00:33:25.779044 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-11-11 00:33:25.779055 | orchestrator | Tuesday 11 November 2025 00:33:17 +0000 (0:00:00.542) 0:00:09.047 ******
2025-11-11 00:33:25.779066 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779076 | orchestrator |
2025-11-11 00:33:25.779087 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-11-11 00:33:25.779098 | orchestrator | Tuesday 11 November 2025 00:33:18 +0000 (0:00:00.445) 0:00:09.493 ******
2025-11-11 00:33:25.779109 | orchestrator | ok: [testbed-manager]
2025-11-11 00:33:25.779119 | orchestrator |
2025-11-11 00:33:25.779130 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-11-11 00:33:25.779141 | orchestrator | Tuesday 11 November 2025 00:33:18 +0000 (0:00:00.659) 0:00:10.152 ******
2025-11-11 00:33:25.779183 | orchestrator | ok: [testbed-manager]
2025-11-11 00:33:25.779195 | orchestrator |
2025-11-11 00:33:25.779205 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-11-11 00:33:25.779216 | orchestrator | Tuesday 11 November 2025 00:33:19 +0000 (0:00:00.449) 0:00:10.601 ******
2025-11-11 00:33:25.779228 | orchestrator | ok: [testbed-manager]
2025-11-11 00:33:25.779239 | orchestrator |
2025-11-11 00:33:25.779249 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-11-11 00:33:25.779260 | orchestrator | Tuesday 11 November 2025 00:33:19 +0000 (0:00:00.455) 0:00:11.057 ******
2025-11-11 00:33:25.779270 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779281 | orchestrator |
2025-11-11 00:33:25.779292 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-11-11 00:33:25.779302 | orchestrator | Tuesday 11 November 2025 00:33:20 +0000 (0:00:01.140) 0:00:12.197 ******
2025-11-11 00:33:25.779313 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-11 00:33:25.779324 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779335 | orchestrator |
2025-11-11 00:33:25.779345 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-11-11 00:33:25.779356 | orchestrator | Tuesday 11 November 2025 00:33:21 +0000 (0:00:00.890) 0:00:13.087 ******
2025-11-11 00:33:25.779366 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779377 | orchestrator |
2025-11-11 00:33:25.779387 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-11-11 00:33:25.779398 | orchestrator | Tuesday 11 November 2025 00:33:23 +0000 (0:00:01.662) 0:00:14.750 ******
2025-11-11 00:33:25.779408 | orchestrator | changed: [testbed-manager]
2025-11-11 00:33:25.779419 | orchestrator |
2025-11-11 00:33:25.779429 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:33:25.779440 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:33:25.779453 | orchestrator |
2025-11-11 00:33:25.779463 | orchestrator |
2025-11-11 00:33:25.779474 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:33:25.779485 | orchestrator | Tuesday 11 November 2025 00:33:25 +0000 (0:00:01.960) 0:00:16.711 ******
2025-11-11 00:33:25.779495 | orchestrator | ===============================================================================
2025-11-11 00:33:25.779506 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.05s
2025-11-11 00:33:25.779516 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.96s
2025-11-11 00:33:25.779527 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.66s
2025-11-11 00:33:25.779537 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.30s
2025-11-11 00:33:25.779548 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2025-11-11 00:33:25.779558 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.89s
2025-11-11 00:33:25.779569 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.66s
2025-11-11 00:33:25.779579 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-11-11 00:33:25.779590 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2025-11-11 00:33:25.779600 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s
2025-11-11 00:33:25.779611 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2025-11-11 00:33:26.050767 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-11-11 00:33:26.093098 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-11-11 00:33:26.093144 | orchestrator | Dload Upload Total Spent Left Speed
2025-11-11 00:33:26.170056 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 200
2025-11-11 00:33:26.184203 | orchestrator | + osism apply --environment custom workarounds
2025-11-11 00:33:28.096106 | orchestrator | 2025-11-11 00:33:28 | INFO  | Trying to run play workarounds in environment custom
2025-11-11 00:33:38.219053 | orchestrator | 2025-11-11 00:33:38 | INFO  | Task a3f0e311-f7e0-4e2b-b9e0-2d26834867e5 (workarounds) was prepared for execution.
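The wireguard play above installs the package, generates the server key pair and a preshared key, renders `wg0.conf` plus client configuration files, and enables `wg-quick@wg0.service`. The key-generation step can be sketched as idempotent Ansible tasks like the following; the paths and task bodies are illustrative assumptions, not the `osism.services.wireguard` role's actual source:

```yaml
# Hedged sketch only -- not the osism.services.wireguard task source.
# File paths are assumptions for illustration.
- name: Create public and private key - server
  ansible.builtin.shell: |
    set -o pipefail
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  args:
    creates: /etc/wireguard/server.key   # idempotent: skip if key already exists
    executable: /bin/bash

- name: Create preshared key
  ansible.builtin.shell: |
    umask 077
    wg genpsk > /etc/wireguard/preshared.key
  args:
    creates: /etc/wireguard/preshared.key
```

The `creates:` guard is what makes such tasks report `changed` only on the first run, matching the log above.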
2025-11-11 00:33:38.219210 | orchestrator | 2025-11-11 00:33:38 | INFO  | It takes a moment until task a3f0e311-f7e0-4e2b-b9e0-2d26834867e5 (workarounds) has been started and output is visible here.
2025-11-11 00:34:02.877953 | orchestrator |
2025-11-11 00:34:02.878150 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-11 00:34:02.878171 | orchestrator |
2025-11-11 00:34:02.878183 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-11-11 00:34:02.878195 | orchestrator | Tuesday 11 November 2025 00:33:42 +0000 (0:00:00.125) 0:00:00.125 ******
2025-11-11 00:34:02.878207 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878223 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878235 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878245 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878257 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878268 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878279 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-11-11 00:34:02.878289 | orchestrator |
2025-11-11 00:34:02.878300 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-11-11 00:34:02.878311 | orchestrator |
2025-11-11 00:34:02.878322 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-11-11 00:34:02.878333 | orchestrator | Tuesday 11 November 2025 00:33:43 +0000 (0:00:00.798) 0:00:00.924 ******
2025-11-11 00:34:02.878344 | orchestrator | ok: [testbed-manager]
2025-11-11 00:34:02.878356 | orchestrator |
2025-11-11 00:34:02.878367 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-11-11 00:34:02.878378 | orchestrator |
2025-11-11 00:34:02.878389 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-11-11 00:34:02.878400 | orchestrator | Tuesday 11 November 2025 00:33:45 +0000 (0:00:02.296) 0:00:03.221 ******
2025-11-11 00:34:02.878411 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:34:02.878422 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:34:02.878433 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:34:02.878443 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:34:02.878454 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:34:02.878465 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:34:02.878475 | orchestrator |
2025-11-11 00:34:02.878487 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-11-11 00:34:02.878500 | orchestrator |
2025-11-11 00:34:02.878512 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-11-11 00:34:02.878525 | orchestrator | Tuesday 11 November 2025 00:33:47 +0000 (0:00:01.823) 0:00:05.044 ******
2025-11-11 00:34:02.878538 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878551 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878564 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878576 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878589 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878637 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-11 00:34:02.878650 | orchestrator |
2025-11-11 00:34:02.878663 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-11-11 00:34:02.878675 | orchestrator | Tuesday 11 November 2025 00:33:48 +0000 (0:00:01.433) 0:00:06.478 ******
2025-11-11 00:34:02.878688 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:34:02.878701 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:34:02.878713 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:34:02.878725 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:34:02.878738 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:34:02.878750 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:34:02.878799 | orchestrator |
2025-11-11 00:34:02.878813 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-11-11 00:34:02.878826 | orchestrator | Tuesday 11 November 2025 00:33:52 +0000 (0:00:03.761) 0:00:10.239 ******
2025-11-11 00:34:02.878838 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:34:02.878851 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:34:02.878861 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:34:02.878872 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:34:02.878882 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:34:02.878893 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:34:02.878903 | orchestrator |
2025-11-11 00:34:02.878932 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-11-11 00:34:02.878944 | orchestrator |
2025-11-11 00:34:02.878955 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-11-11 00:34:02.878966 | orchestrator | Tuesday 11 November 2025 00:33:53 +0000 (0:00:00.669) 0:00:10.909 ******
2025-11-11 00:34:02.878976 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:34:02.878987 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:34:02.878998 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:34:02.879008 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:34:02.879019 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:34:02.879029 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:34:02.879040 | orchestrator | changed: [testbed-manager]
2025-11-11 00:34:02.879050 | orchestrator |
2025-11-11 00:34:02.879061 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-11-11 00:34:02.879072 | orchestrator | Tuesday 11 November 2025 00:33:54 +0000 (0:00:01.524) 0:00:12.433 ******
2025-11-11 00:34:02.879083 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:34:02.879093 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:34:02.879104 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:34:02.879115 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:34:02.879125 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:34:02.879136 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:34:02.879167 | orchestrator | changed: [testbed-manager]
2025-11-11 00:34:02.879178 | orchestrator |
2025-11-11 00:34:02.879189 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-11-11 00:34:02.879200 | orchestrator | Tuesday 11 November 2025 00:33:56 +0000 (0:00:01.503) 0:00:13.937 ******
2025-11-11 00:34:02.879211 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:34:02.879221 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:34:02.879232 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:34:02.879243 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:34:02.879253 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:34:02.879264 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:34:02.879275 | orchestrator | ok: [testbed-manager]
2025-11-11 00:34:02.879285 | orchestrator |
2025-11-11 00:34:02.879296 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-11-11 00:34:02.879307 | orchestrator | Tuesday 11 November 2025 00:33:57 +0000 (0:00:01.511) 0:00:15.448 ******
2025-11-11 00:34:02.879318 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:34:02.879329 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:34:02.879340 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:34:02.879360 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:34:02.879371 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:34:02.879382 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:34:02.879392 | orchestrator | changed: [testbed-manager]
2025-11-11 00:34:02.879403 | orchestrator |
2025-11-11 00:34:02.879414 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-11-11 00:34:02.879424 | orchestrator | Tuesday 11 November 2025 00:33:59 +0000 (0:00:01.735) 0:00:17.184 ******
2025-11-11 00:34:02.879435 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:34:02.879446 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:34:02.879456 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:34:02.879467 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:34:02.879478 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:34:02.879488 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:34:02.879499 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:34:02.879510 | orchestrator |
2025-11-11 00:34:02.879520 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-11-11 00:34:02.879531 | orchestrator |
2025-11-11 00:34:02.879542 | orchestrator | TASK [Install python3-docker] **************************************************
2025-11-11 00:34:02.879552 | orchestrator | Tuesday 11 November 2025 00:33:59 +0000 (0:00:00.601) 0:00:17.785 ******
2025-11-11 00:34:02.879563 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:34:02.879574 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:34:02.879584 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:34:02.879595 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:34:02.879606 | orchestrator | ok: [testbed-manager]
2025-11-11 00:34:02.879616 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:34:02.879627 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:34:02.879637 | orchestrator |
2025-11-11 00:34:02.879648 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:34:02.879660 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:34:02.879672 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879684 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879695 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879706 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879716 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879727 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:02.879738 | orchestrator |
2025-11-11 00:34:02.879749 | orchestrator |
2025-11-11 00:34:02.879759 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:34:02.879792 | orchestrator | Tuesday 11 November 2025 00:34:02 +0000 (0:00:02.938) 0:00:20.724 ******
2025-11-11 00:34:02.879803 | orchestrator | ===============================================================================
2025-11-11 00:34:02.879820 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s
2025-11-11 00:34:02.879831 | orchestrator | Install python3-docker -------------------------------------------------- 2.94s
2025-11-11 00:34:02.879842 | orchestrator | Apply netplan configuration --------------------------------------------- 2.30s
2025-11-11 00:34:02.879852 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s
2025-11-11 00:34:02.879870 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s
2025-11-11 00:34:02.879881 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s
2025-11-11 00:34:02.879892 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s
2025-11-11 00:34:02.879902 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2025-11-11 00:34:02.879913 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s
2025-11-11 00:34:02.879923 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-11-11 00:34:02.879934 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s
2025-11-11 00:34:02.879952 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2025-11-11 00:34:03.447949 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-11-11 00:34:15.421066 | orchestrator | 2025-11-11 00:34:15 | INFO  | Task d5f8c31f-07e2-4df2-a3dd-d015dd730d6a (reboot) was prepared for execution.
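The reboot play invoked above is guarded by the `ireallymeanit` extra-var passed on the command line; without it, the first task aborts instead of rebooting. The nodes are then rebooted fire-and-forget (the "wait" task is skipped), because a separate wait-for-connection play follows. A hedged sketch of that guard-plus-async pattern, with wording and values that are assumptions rather than the playbook's actual source:

```yaml
# Hedged sketch of the confirmation-guard pattern; not the actual playbook source.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Re-run with -e ireallymeanit=yes to really reboot."
  when: ireallymeanit | default('no') != 'yes'

- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && shutdown -r now   # sleep lets the task return first
  async: 1     # fire-and-forget: do not hold the SSH connection open
  poll: 0
```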
2025-11-11 00:34:15.421188 | orchestrator | 2025-11-11 00:34:15 | INFO  | It takes a moment until task d5f8c31f-07e2-4df2-a3dd-d015dd730d6a (reboot) has been started and output is visible here.
2025-11-11 00:34:25.299542 | orchestrator |
2025-11-11 00:34:25.299679 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.299696 | orchestrator |
2025-11-11 00:34:25.299709 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.299721 | orchestrator | Tuesday 11 November 2025 00:34:19 +0000 (0:00:00.195) 0:00:00.195 ******
2025-11-11 00:34:25.299732 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:34:25.299799 | orchestrator |
2025-11-11 00:34:25.299813 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.299824 | orchestrator | Tuesday 11 November 2025 00:34:19 +0000 (0:00:00.099) 0:00:00.295 ******
2025-11-11 00:34:25.299836 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:34:25.299847 | orchestrator |
2025-11-11 00:34:25.299857 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.299869 | orchestrator | Tuesday 11 November 2025 00:34:20 +0000 (0:00:00.849) 0:00:01.145 ******
2025-11-11 00:34:25.299879 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:34:25.299890 | orchestrator |
2025-11-11 00:34:25.299901 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.299912 | orchestrator |
2025-11-11 00:34:25.299922 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.299933 | orchestrator | Tuesday 11 November 2025 00:34:20 +0000 (0:00:00.108) 0:00:01.253 ******
2025-11-11 00:34:25.299944 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:34:25.299954 | orchestrator |
2025-11-11 00:34:25.299965 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.299976 | orchestrator | Tuesday 11 November 2025 00:34:20 +0000 (0:00:00.110) 0:00:01.364 ******
2025-11-11 00:34:25.299986 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:34:25.299997 | orchestrator |
2025-11-11 00:34:25.300008 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.300019 | orchestrator | Tuesday 11 November 2025 00:34:21 +0000 (0:00:00.644) 0:00:02.009 ******
2025-11-11 00:34:25.300029 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:34:25.300040 | orchestrator |
2025-11-11 00:34:25.300051 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.300061 | orchestrator |
2025-11-11 00:34:25.300072 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.300083 | orchestrator | Tuesday 11 November 2025 00:34:21 +0000 (0:00:00.108) 0:00:02.117 ******
2025-11-11 00:34:25.300093 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:34:25.300104 | orchestrator |
2025-11-11 00:34:25.300115 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.300156 | orchestrator | Tuesday 11 November 2025 00:34:21 +0000 (0:00:00.191) 0:00:02.309 ******
2025-11-11 00:34:25.300167 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:34:25.300178 | orchestrator |
2025-11-11 00:34:25.300189 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.300199 | orchestrator | Tuesday 11 November 2025 00:34:22 +0000 (0:00:00.651) 0:00:02.961 ******
2025-11-11 00:34:25.300210 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:34:25.300220 | orchestrator |
2025-11-11 00:34:25.300231 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.300241 | orchestrator |
2025-11-11 00:34:25.300252 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.300262 | orchestrator | Tuesday 11 November 2025 00:34:22 +0000 (0:00:00.126) 0:00:03.087 ******
2025-11-11 00:34:25.300273 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:34:25.300284 | orchestrator |
2025-11-11 00:34:25.300294 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.300305 | orchestrator | Tuesday 11 November 2025 00:34:22 +0000 (0:00:00.123) 0:00:03.210 ******
2025-11-11 00:34:25.300315 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:34:25.300326 | orchestrator |
2025-11-11 00:34:25.300337 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.300347 | orchestrator | Tuesday 11 November 2025 00:34:23 +0000 (0:00:00.663) 0:00:03.874 ******
2025-11-11 00:34:25.300358 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:34:25.300369 | orchestrator |
2025-11-11 00:34:25.300380 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.300409 | orchestrator |
2025-11-11 00:34:25.300420 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.300431 | orchestrator | Tuesday 11 November 2025 00:34:23 +0000 (0:00:00.111) 0:00:03.986 ******
2025-11-11 00:34:25.300442 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:34:25.300453 | orchestrator |
2025-11-11 00:34:25.300464 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.300474 | orchestrator | Tuesday 11 November 2025 00:34:23 +0000 (0:00:00.099) 0:00:04.085 ******
2025-11-11 00:34:25.300485 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:34:25.300496 | orchestrator |
2025-11-11 00:34:25.300506 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.300517 | orchestrator | Tuesday 11 November 2025 00:34:24 +0000 (0:00:00.651) 0:00:04.736 ******
2025-11-11 00:34:25.300528 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:34:25.300539 | orchestrator |
2025-11-11 00:34:25.300549 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-11 00:34:25.300560 | orchestrator |
2025-11-11 00:34:25.300571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-11 00:34:25.300581 | orchestrator | Tuesday 11 November 2025 00:34:24 +0000 (0:00:00.107) 0:00:04.844 ******
2025-11-11 00:34:25.300592 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:34:25.300603 | orchestrator |
2025-11-11 00:34:25.300613 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-11 00:34:25.300624 | orchestrator | Tuesday 11 November 2025 00:34:24 +0000 (0:00:00.118) 0:00:04.963 ******
2025-11-11 00:34:25.300635 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:34:25.300646 | orchestrator |
2025-11-11 00:34:25.300656 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-11 00:34:25.300667 | orchestrator | Tuesday 11 November 2025 00:34:24 +0000 (0:00:00.737) 0:00:05.701 ******
2025-11-11 00:34:25.300697 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:34:25.300708 | orchestrator |
2025-11-11 00:34:25.300719 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:34:25.300731 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300772 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300785 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300795 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300806 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300817 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:34:25.300827 | orchestrator |
2025-11-11 00:34:25.300838 | orchestrator |
2025-11-11 00:34:25.300849 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:34:25.300860 | orchestrator | Tuesday 11 November 2025 00:34:25 +0000 (0:00:00.036) 0:00:05.738 ******
2025-11-11 00:34:25.300871 | orchestrator | ===============================================================================
2025-11-11 00:34:25.300881 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.20s
2025-11-11 00:34:25.300892 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2025-11-11 00:34:25.300902 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s
2025-11-11 00:34:25.568276 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-11-11 00:34:37.551462 | orchestrator | 2025-11-11 00:34:37 | INFO  | Task ad537765-797e-432f-8bd6-8c651f00eaf8 (wait-for-connection) was prepared for execution.
2025-11-11 00:34:37.551597 | orchestrator | 2025-11-11 00:34:37 | INFO  | It takes a moment until task ad537765-797e-432f-8bd6-8c651f00eaf8 (wait-for-connection) has been started and output is visible here.
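Because the reboot play returned without waiting, the wait-for-connection play that follows blocks until SSH on each node is usable again. A minimal sketch using the stock `wait_for_connection` module; the delay and timeout values are illustrative assumptions:

```yaml
# Hedged sketch; delay/timeout values are assumptions, not the playbook's actual settings.
- name: Wait until remote system is reachable
  ansible.builtin.wait_for_connection:
    delay: 10      # give the node time to actually go down before probing
    timeout: 600   # fail the play if SSH is not back within 10 minutes
```

The module retries the full connection plus a module round-trip, so a node counts as reachable only once Ansible can actually execute tasks on it again.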
2025-11-11 00:34:53.442226 | orchestrator |
2025-11-11 00:34:53.442346 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-11-11 00:34:53.442360 | orchestrator |
2025-11-11 00:34:53.442371 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-11-11 00:34:53.442381 | orchestrator | Tuesday 11 November 2025 00:34:41 +0000 (0:00:00.225) 0:00:00.225 ******
2025-11-11 00:34:53.442392 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:34:53.442404 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:34:53.442413 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:34:53.442423 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:34:53.442433 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:34:53.442443 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:34:53.442452 | orchestrator |
2025-11-11 00:34:53.442462 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:34:53.442473 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442485 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442515 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442525 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442535 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442544 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:34:53.442579 | orchestrator |
2025-11-11 00:34:53.442590 | orchestrator |
2025-11-11 00:34:53.442599 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:34:53.442609 | orchestrator | Tuesday 11 November 2025 00:34:53 +0000 (0:00:11.466) 0:00:11.691 ******
2025-11-11 00:34:53.442619 | orchestrator | ===============================================================================
2025-11-11 00:34:53.442628 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.47s
2025-11-11 00:34:53.713405 | orchestrator | + osism apply hddtemp
2025-11-11 00:35:05.868614 | orchestrator | 2025-11-11 00:35:05 | INFO  | Task 90fce3ba-8fc6-408a-89ef-53bf66c2e4c7 (hddtemp) was prepared for execution.
2025-11-11 00:35:05.868804 | orchestrator | 2025-11-11 00:35:05 | INFO  | It takes a moment until task 90fce3ba-8fc6-408a-89ef-53bf66c2e4c7 (hddtemp) has been started and output is visible here.
2025-11-11 00:35:33.514596 | orchestrator |
2025-11-11 00:35:33.514782 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-11-11 00:35:33.514800 | orchestrator |
2025-11-11 00:35:33.514812 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-11-11 00:35:33.514824 | orchestrator | Tuesday 11 November 2025 00:35:09 +0000 (0:00:00.189) 0:00:00.189 ******
2025-11-11 00:35:33.514836 | orchestrator | ok: [testbed-manager]
2025-11-11 00:35:33.514848 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:35:33.514859 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:35:33.514870 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:35:33.514882 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:35:33.514893 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:35:33.514903 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:35:33.514914 | orchestrator |
2025-11-11 00:35:33.514925 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-11-11 00:35:33.514936 | orchestrator | Tuesday 11 November 2025 00:35:10 +0000 (0:00:00.536) 0:00:00.725 ******
2025-11-11 00:35:33.514948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:35:33.514962 | orchestrator |
2025-11-11 00:35:33.514973 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-11-11 00:35:33.514984 | orchestrator | Tuesday 11 November 2025 00:35:11 +0000 (0:00:01.019) 0:00:01.744 ******
2025-11-11 00:35:33.514995 | orchestrator | ok: [testbed-manager]
2025-11-11 00:35:33.515005 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:35:33.515016 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:35:33.515026 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:35:33.515037 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:35:33.515048 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:35:33.515058 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:35:33.515069 | orchestrator |
2025-11-11 00:35:33.515079 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-11-11 00:35:33.515090 | orchestrator | Tuesday 11 November 2025 00:35:13 +0000 (0:00:01.991) 0:00:03.736 ******
2025-11-11 00:35:33.515101 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:35:33.515113 | orchestrator | changed: [testbed-manager]
2025-11-11 00:35:33.515125 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:35:33.515137 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:35:33.515149 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:35:33.515160 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:35:33.515172 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:35:33.515184 | orchestrator |
2025-11-11 00:35:33.515195 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-11-11 00:35:33.515208 | orchestrator | Tuesday 11 November 2025 00:35:14 +0000 (0:00:01.115) 0:00:04.852 ******
2025-11-11 00:35:33.515220 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:35:33.515231 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:35:33.515243 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:35:33.515281 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:35:33.515294 | orchestrator | ok: [testbed-manager]
2025-11-11 00:35:33.515305 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:35:33.515317 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:35:33.515329 | orchestrator |
2025-11-11 00:35:33.515341 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-11-11 00:35:33.515353 | orchestrator | Tuesday 11 November 2025 00:35:15 +0000 (0:00:01.088) 0:00:05.940 ******
2025-11-11 00:35:33.515365 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:35:33.515377 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:35:33.515389 | orchestrator | changed: [testbed-manager]
2025-11-11 00:35:33.515400 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:35:33.515412 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:35:33.515424 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:35:33.515435 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:35:33.515447 | orchestrator |
2025-11-11 00:35:33.515460 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-11-11 00:35:33.515472 | orchestrator | Tuesday 11 November 2025 00:35:16 +0000 (0:00:00.846) 0:00:06.787 ******
2025-11-11 00:35:33.515482 | orchestrator | changed: [testbed-manager]
2025-11-11 00:35:33.515493 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:35:33.515504 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:35:33.515514 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:35:33.515525 | orchestrator | changed:
[testbed-node-4] 2025-11-11 00:35:33.515535 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:35:33.515563 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:35:33.515574 | orchestrator | 2025-11-11 00:35:33.515584 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-11-11 00:35:33.515595 | orchestrator | Tuesday 11 November 2025 00:35:29 +0000 (0:00:12.861) 0:00:19.648 ****** 2025-11-11 00:35:33.515606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:35:33.515617 | orchestrator | 2025-11-11 00:35:33.515628 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-11-11 00:35:33.515639 | orchestrator | Tuesday 11 November 2025 00:35:30 +0000 (0:00:01.159) 0:00:20.808 ****** 2025-11-11 00:35:33.515649 | orchestrator | changed: [testbed-manager] 2025-11-11 00:35:33.515660 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:35:33.515670 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:35:33.515681 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:35:33.515691 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:35:33.515719 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:35:33.515730 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:35:33.515740 | orchestrator | 2025-11-11 00:35:33.515751 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:35:33.515762 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:35:33.515793 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515806 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515817 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515828 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515839 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515860 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-11 00:35:33.515871 | orchestrator | 2025-11-11 00:35:33.515882 | orchestrator | 2025-11-11 00:35:33.515892 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:35:33.515903 | orchestrator | Tuesday 11 November 2025 00:35:33 +0000 (0:00:02.634) 0:00:23.442 ****** 2025-11-11 00:35:33.515914 | orchestrator | =============================================================================== 2025-11-11 00:35:33.515925 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.86s 2025-11-11 00:35:33.515936 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.63s 2025-11-11 00:35:33.515947 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.99s 2025-11-11 00:35:33.515958 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2025-11-11 00:35:33.515969 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-11-11 00:35:33.515980 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s 2025-11-11 00:35:33.515990 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2025-11-11 00:35:33.516001 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.85s 2025-11-11 00:35:33.516012 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.54s 2025-11-11 00:35:33.768621 | orchestrator | ++ semver latest 7.1.1 2025-11-11 00:35:33.815343 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-11 00:35:33.815407 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-11 00:35:33.815421 | orchestrator | + sudo systemctl restart manager.service 2025-11-11 00:36:36.367799 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-11 00:36:36.367883 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-11 00:36:36.367897 | orchestrator | + local max_attempts=60 2025-11-11 00:36:36.367910 | orchestrator | + local name=ceph-ansible 2025-11-11 00:36:36.367921 | orchestrator | + local attempt_num=1 2025-11-11 00:36:36.367932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:36:36.402700 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:36:36.402782 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:36:36.402798 | orchestrator | + sleep 5 2025-11-11 00:36:41.408168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:36:41.444949 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:36:41.445008 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:36:41.445022 | orchestrator | + sleep 5 2025-11-11 00:36:46.447114 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:36:46.465587 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:36:46.465630 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:36:46.465643 | orchestrator | + sleep 5 2025-11-11 00:36:51.471267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:36:51.504121 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:36:51.504172 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:36:51.504186 | orchestrator | + sleep 5 2025-11-11 00:36:56.509729 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:36:56.531214 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:36:56.531270 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:36:56.531283 | orchestrator | + sleep 5 2025-11-11 00:37:01.535787 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:01.574064 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:01.574120 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:01.574133 | orchestrator | + sleep 5 2025-11-11 00:37:06.579022 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:06.619116 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:06.619165 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:06.619209 | orchestrator | + sleep 5 2025-11-11 00:37:11.627073 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:11.665160 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:11.665212 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:11.665225 | orchestrator | + sleep 5 2025-11-11 00:37:16.669728 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:16.686228 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:16.686275 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:16.686287 | orchestrator | + sleep 5 2025-11-11 00:37:21.689999 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:21.728598 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-11-11 00:37:21.728676 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:21.728691 | orchestrator | + sleep 5 2025-11-11 00:37:26.734246 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:26.773740 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:26.773788 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:26.773800 | orchestrator | + sleep 5 2025-11-11 00:37:31.778993 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:31.813265 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:31.813303 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:31.813316 | orchestrator | + sleep 5 2025-11-11 00:37:36.818393 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:36.861573 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:36.861675 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-11 00:37:36.861691 | orchestrator | + sleep 5 2025-11-11 00:37:41.866099 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-11 00:37:41.906182 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:41.906228 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-11 00:37:41.906241 | orchestrator | + local max_attempts=60 2025-11-11 00:37:41.906535 | orchestrator | + local name=kolla-ansible 2025-11-11 00:37:41.906851 | orchestrator | + local attempt_num=1 2025-11-11 00:37:41.907755 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-11 00:37:41.949125 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:41.949168 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-11 00:37:41.949181 | orchestrator | + local max_attempts=60 2025-11-11 
00:37:41.949193 | orchestrator | + local name=osism-ansible 2025-11-11 00:37:41.949204 | orchestrator | + local attempt_num=1 2025-11-11 00:37:41.949746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-11 00:37:41.988582 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-11 00:37:41.988609 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-11 00:37:41.988621 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-11 00:37:42.155396 | orchestrator | ARA in ceph-ansible already disabled. 2025-11-11 00:37:42.340395 | orchestrator | ARA in kolla-ansible already disabled. 2025-11-11 00:37:42.675756 | orchestrator | + osism apply gather-facts 2025-11-11 00:38:02.173758 | orchestrator | 2025-11-11 00:38:02 | INFO  | Task 6093bbcb-1295-4a34-a4b3-41e79eb1c3e9 (gather-facts) was prepared for execution. 2025-11-11 00:38:02.173844 | orchestrator | 2025-11-11 00:38:02 | INFO  | It takes a moment until task 6093bbcb-1295-4a34-a4b3-41e79eb1c3e9 (gather-facts) has been started and output is visible here. 
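The `set -x` trace above expands the `wait_for_container_healthy` helper three times (ceph-ansible, kolla-ansible, osism-ansible). Reconstructed from that trace alone, the helper is roughly the following sketch; the real script uses `/usr/bin/docker` and lives in the testbed configuration, and the error message here is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy, reconstructed from the set -x trace:
# poll the Docker health status every 5 seconds until it reports "healthy",
# giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # The trace shows exactly this inspect format string being polled.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            # Hypothetical message; the trace only shows the loop, not its failure path.
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log, ceph-ansible goes `unhealthy` → `starting` → `healthy` over roughly 14 polls (about 65 seconds), while the other two containers pass on the first poll.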
2025-11-11 00:38:15.001573 | orchestrator |
2025-11-11 00:38:15.001747 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-11 00:38:15.001765 | orchestrator |
2025-11-11 00:38:15.001777 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-11 00:38:15.001789 | orchestrator | Tuesday 11 November 2025 00:38:05 +0000 (0:00:00.200) 0:00:00.200 ******
2025-11-11 00:38:15.001801 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:38:15.001814 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:38:15.001825 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:38:15.001837 | orchestrator | ok: [testbed-manager]
2025-11-11 00:38:15.001848 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:38:15.001864 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:38:15.001923 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:38:15.001943 | orchestrator |
2025-11-11 00:38:15.001961 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-11-11 00:38:15.001979 | orchestrator |
2025-11-11 00:38:15.001999 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-11-11 00:38:15.002012 | orchestrator | Tuesday 11 November 2025 00:38:14 +0000 (0:00:08.293) 0:00:08.494 ******
2025-11-11 00:38:15.002085 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:38:15.002097 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:38:15.002111 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:38:15.002123 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:38:15.002135 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:38:15.002147 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:38:15.002159 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:38:15.002171 | orchestrator |
2025-11-11 00:38:15.002183 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:38:15.002196 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002210 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002222 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002234 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002246 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002259 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002271 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:38:15.002283 | orchestrator |
2025-11-11 00:38:15.002295 | orchestrator |
2025-11-11 00:38:15.002307 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:38:15.002320 | orchestrator | Tuesday 11 November 2025 00:38:14 +0000 (0:00:00.504) 0:00:08.999 ******
2025-11-11 00:38:15.002332 | orchestrator | ===============================================================================
2025-11-11 00:38:15.002344 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.29s
2025-11-11 00:38:15.002357 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-11-11 00:38:15.335456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-11-11 00:38:15.351734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-11-11 00:38:15.364178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-11-11 00:38:15.374519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-11-11 00:38:15.384293 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-11-11 00:38:15.394275 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-11-11 00:38:15.404126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-11-11 00:38:15.415231 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-11-11 00:38:15.435605 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-11-11 00:38:15.451208 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-11-11 00:38:15.469400 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-11-11 00:38:15.484190 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-11-11 00:38:15.501907 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-11-11 00:38:15.520786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-11-11 00:38:15.535269 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-11-11 00:38:15.555366 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-11-11 00:38:15.569559 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-11-11 00:38:15.586768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-11-11 00:38:15.605082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-11-11 00:38:15.617362 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-11-11 00:38:15.633442 | orchestrator | + [[ false == \t\r\u\e ]]
2025-11-11 00:38:16.120027 | orchestrator | ok: Runtime: 0:24:16.906521
2025-11-11 00:38:16.208978 |
2025-11-11 00:38:16.209103 | TASK [Deploy services]
2025-11-11 00:38:16.742039 | orchestrator | skipping: Conditional result was False
2025-11-11 00:38:16.759005 |
2025-11-11 00:38:16.759156 | TASK [Deploy in a nutshell]
2025-11-11 00:38:17.442338 | orchestrator | + set -e
2025-11-11 00:38:17.442521 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-11-11 00:38:17.442534 | orchestrator | ++ export INTERACTIVE=false
2025-11-11 00:38:17.442544 | orchestrator | ++ INTERACTIVE=false
2025-11-11 00:38:17.442549 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-11-11 00:38:17.442554 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-11-11 00:38:17.442559 | orchestrator | + source /opt/manager-vars.sh
2025-11-11 00:38:17.442583 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-11-11 00:38:17.442595 | orchestrator | ++ NUMBER_OF_NODES=6
2025-11-11 00:38:17.442600 | orchestrator | ++ export CEPH_VERSION=reef
2025-11-11 00:38:17.442607 | orchestrator | ++ CEPH_VERSION=reef
2025-11-11 00:38:17.442611 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-11-11 00:38:17.442619 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-11-11 00:38:17.442623 | orchestrator | ++ export MANAGER_VERSION=latest
2025-11-11 00:38:17.442631 | orchestrator | ++ MANAGER_VERSION=latest
2025-11-11 00:38:17.442661 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-11-11 00:38:17.442668 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-11-11 00:38:17.442672 | orchestrator | ++ export ARA=false
2025-11-11 00:38:17.442676 | orchestrator | ++ ARA=false
2025-11-11 00:38:17.442689 | orchestrator | ++ export DEPLOY_MODE=manager
2025-11-11 00:38:17.442694 | orchestrator | ++ DEPLOY_MODE=manager
2025-11-11 00:38:17.442698 | orchestrator | ++ export TEMPEST=true
2025-11-11 00:38:17.442701 | orchestrator | ++ TEMPEST=true
2025-11-11 00:38:17.442705 | orchestrator | ++ export IS_ZUUL=true
2025-11-11 00:38:17.442709 | orchestrator | ++ IS_ZUUL=true
2025-11-11 00:38:17.442713 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227
2025-11-11 00:38:17.442717 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.227
2025-11-11 00:38:17.442721 | orchestrator | ++ export EXTERNAL_API=false
2025-11-11 00:38:17.442725 | orchestrator | ++ EXTERNAL_API=false
2025-11-11 00:38:17.442728 | orchestrator |
2025-11-11 00:38:17.442732 | orchestrator | # PULL IMAGES
2025-11-11 00:38:17.442736 | orchestrator |
2025-11-11 00:38:17.442740 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-11-11 00:38:17.442744 | orchestrator | ++ IMAGE_USER=ubuntu
2025-11-11 00:38:17.442748 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-11-11 00:38:17.442752 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-11-11 00:38:17.442755 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-11-11 00:38:17.442764 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-11-11 00:38:17.442768 | orchestrator | + echo
2025-11-11 00:38:17.442772 | orchestrator | + echo '# PULL IMAGES'
2025-11-11 00:38:17.442775 | orchestrator | + echo
2025-11-11 00:38:17.443598 | orchestrator | ++ semver latest 7.0.0
2025-11-11 00:38:17.487611 | orchestrator | + [[ -1 -ge 0 ]]
2025-11-11 00:38:17.487626 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-11 00:38:17.487632 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-11-11 00:38:19.190089 | orchestrator | 2025-11-11 00:38:19 | INFO  | Trying to run play pull-images in environment custom
2025-11-11 00:38:29.287389 | orchestrator | 2025-11-11 00:38:29 | INFO  | Task 771bfd9d-13d1-4c9f-9de3-462500d2351a (pull-images) was prepared for execution.
2025-11-11 00:38:29.287530 | orchestrator | 2025-11-11 00:38:29 | INFO  | Task 771bfd9d-13d1-4c9f-9de3-462500d2351a is running in background. No more output. Check ARA for logs.
2025-11-11 00:38:31.512768 | orchestrator | 2025-11-11 00:38:31 | INFO  | Trying to run play wipe-partitions in environment custom
2025-11-11 00:38:41.741616 | orchestrator | 2025-11-11 00:38:41 | INFO  | Task c8123ce5-8b54-4e78-9dd0-d2091326ae46 (wipe-partitions) was prepared for execution.
2025-11-11 00:38:41.741832 | orchestrator | 2025-11-11 00:38:41 | INFO  | It takes a moment until task c8123ce5-8b54-4e78-9dd0-d2091326ae46 (wipe-partitions) has been started and output is visible here.
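The trace shows the same version gate twice (`semver latest 7.1.1` and `semver latest 7.0.0`, each followed by `[[ -1 -ge 0 ]]` failing and `[[ latest == latest ]]` passing). Under the assumption that `semver` is a helper on the manager that prints -1, 0, or 1 when the first version sorts before, equal to, or after the second, the gate can be sketched as:

```shell
#!/usr/bin/env bash
# Sketch of the MANAGER_VERSION gate seen in the trace (an assumption, not
# the verbatim script): proceed when the version is at least the minimum,
# or when the special tag "latest" is used, which semver-compares as older
# (-1) but is still accepted.
manager_version_at_least() {
    local version="$1" minimum="$2"
    if [[ "$(semver "$version" "$minimum")" -ge 0 ]]; then
        return 0
    fi
    # "latest" loses the numeric comparison but passes this fallback check.
    [[ "$version" == "latest" ]]
}
```

This matches the observed behaviour: with `MANAGER_VERSION=latest` the numeric test yields -1, the `latest` fallback fires, and the run continues to `osism apply --no-wait -r 2 -e custom pull-images`.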
2025-11-11 00:38:54.419033 | orchestrator | 2025-11-11 00:38:54.419170 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-11-11 00:38:54.419188 | orchestrator | 2025-11-11 00:38:54.419200 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-11-11 00:38:54.419225 | orchestrator | Tuesday 11 November 2025 00:38:45 +0000 (0:00:00.118) 0:00:00.118 ****** 2025-11-11 00:38:54.419239 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:38:54.419251 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:38:54.419262 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:38:54.419274 | orchestrator | 2025-11-11 00:38:54.419285 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-11-11 00:38:54.419330 | orchestrator | Tuesday 11 November 2025 00:38:46 +0000 (0:00:00.549) 0:00:00.667 ****** 2025-11-11 00:38:54.419342 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:38:54.419353 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:38:54.419370 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:38:54.419380 | orchestrator | 2025-11-11 00:38:54.419391 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-11-11 00:38:54.419402 | orchestrator | Tuesday 11 November 2025 00:38:46 +0000 (0:00:00.317) 0:00:00.985 ****** 2025-11-11 00:38:54.419413 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:38:54.419424 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:38:54.419435 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:38:54.419445 | orchestrator | 2025-11-11 00:38:54.419456 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-11-11 00:38:54.419467 | orchestrator | Tuesday 11 November 2025 00:38:47 +0000 (0:00:00.523) 0:00:01.508 ****** 2025-11-11 00:38:54.419477 | orchestrator | skipping: 
[testbed-node-3] 2025-11-11 00:38:54.419488 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:38:54.419498 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:38:54.419509 | orchestrator | 2025-11-11 00:38:54.419519 | orchestrator | TASK [Check device availability] *********************************************** 2025-11-11 00:38:54.419530 | orchestrator | Tuesday 11 November 2025 00:38:47 +0000 (0:00:00.225) 0:00:01.734 ****** 2025-11-11 00:38:54.419541 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-11 00:38:54.419556 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-11 00:38:54.419567 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-11 00:38:54.419578 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-11 00:38:54.419588 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-11 00:38:54.419598 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-11 00:38:54.419609 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-11 00:38:54.419619 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-11 00:38:54.419630 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-11 00:38:54.419641 | orchestrator | 2025-11-11 00:38:54.419704 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-11-11 00:38:54.419716 | orchestrator | Tuesday 11 November 2025 00:38:49 +0000 (0:00:01.989) 0:00:03.724 ****** 2025-11-11 00:38:54.419728 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-11-11 00:38:54.419739 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-11-11 00:38:54.419750 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-11-11 00:38:54.419760 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-11-11 00:38:54.419771 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-11-11 00:38:54.419781 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-11-11 00:38:54.419792 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-11-11 00:38:54.419803 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-11-11 00:38:54.419813 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-11-11 00:38:54.419824 | orchestrator | 2025-11-11 00:38:54.419834 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-11-11 00:38:54.419845 | orchestrator | Tuesday 11 November 2025 00:38:50 +0000 (0:00:01.539) 0:00:05.264 ****** 2025-11-11 00:38:54.419856 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-11 00:38:54.419866 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-11 00:38:54.419877 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-11 00:38:54.419888 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-11 00:38:54.419898 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-11 00:38:54.419919 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-11 00:38:54.419930 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-11 00:38:54.419949 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-11 00:38:54.419960 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-11 00:38:54.419971 | orchestrator | 2025-11-11 00:38:54.419982 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-11-11 00:38:54.419992 | orchestrator | Tuesday 11 November 2025 00:38:52 +0000 (0:00:02.085) 0:00:07.349 ****** 2025-11-11 00:38:54.420003 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:38:54.420014 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:38:54.420025 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:38:54.420035 | orchestrator | 2025-11-11 00:38:54.420046 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-11-11 00:38:54.420057 | orchestrator | Tuesday 11 November 2025 00:38:53 +0000 (0:00:00.592) 0:00:07.942 ****** 2025-11-11 00:38:54.420068 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:38:54.420078 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:38:54.420089 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:38:54.420100 | orchestrator | 2025-11-11 00:38:54.420110 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:38:54.420124 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:38:54.420136 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:38:54.420167 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:38:54.420178 | orchestrator | 2025-11-11 00:38:54.420189 | orchestrator | 2025-11-11 00:38:54.420200 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:38:54.420211 | orchestrator | Tuesday 11 November 2025 00:38:54 +0000 (0:00:00.635) 0:00:08.577 ****** 2025-11-11 00:38:54.420222 | orchestrator | =============================================================================== 2025-11-11 00:38:54.420233 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s 2025-11-11 00:38:54.420243 | orchestrator | Check device availability ----------------------------------------------- 1.99s 2025-11-11 00:38:54.420254 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.54s 2025-11-11 00:38:54.420265 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-11-11 00:38:54.420275 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.59s 2025-11-11 00:38:54.420286 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-11-11 00:38:54.420297 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s 2025-11-11 00:38:54.420308 | orchestrator | Remove all rook related logical devices --------------------------------- 0.32s 2025-11-11 00:38:54.420319 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-11-11 00:39:06.664957 | orchestrator | 2025-11-11 00:39:06 | INFO  | Task dbba48e4-d9b3-453a-9c54-903722acb734 (facts) was prepared for execution. 2025-11-11 00:39:06.665076 | orchestrator | 2025-11-11 00:39:06 | INFO  | It takes a moment until task dbba48e4-d9b3-453a-9c54-903722acb734 (facts) has been started and output is visible here. 2025-11-11 00:39:19.494854 | orchestrator | 2025-11-11 00:39:19.494977 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-11 00:39:19.494994 | orchestrator | 2025-11-11 00:39:19.495006 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-11 00:39:19.495018 | orchestrator | Tuesday 11 November 2025 00:39:10 +0000 (0:00:00.264) 0:00:00.264 ****** 2025-11-11 00:39:19.495029 | orchestrator | ok: [testbed-manager] 2025-11-11 00:39:19.495042 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:39:19.495052 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:39:19.495089 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:39:19.495100 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:19.495111 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:39:19.495121 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:39:19.495131 | orchestrator | 2025-11-11 00:39:19.495144 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-11 
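The playbook recapped above cleans the OSD block devices before Ceph provisioning. A minimal sketch of what those tasks likely look like; the host pattern, device list, and exact module invocations are assumptions for illustration, not the actual OSISM playbook:

```yaml
# Hedged sketch of the disk-cleanup tasks in the recap above.
# Hosts and osd_devices are assumptions taken from the log output.
- name: Clean Ceph OSD block devices
  hosts: testbed-node-3:testbed-node-4:testbed-node-5
  become: true
  vars:
    osd_devices: [/dev/sdb, /dev/sdc, /dev/sdd]
  tasks:
    - name: Wipe partitions with wipefs
      ansible.builtin.command: "wipefs --all {{ item }}"
      loop: "{{ osd_devices }}"

    - name: Overwrite first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32"
      loop: "{{ osd_devices }}"

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger
```

Zeroing the first 32M after `wipefs` removes leftover LVM/Ceph metadata that sits beyond the signatures `wipefs` knows about; the two `udevadm` calls then make the kernel re-evaluate the now-blank devices.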
00:39:19.495155 | orchestrator | Tuesday 11 November 2025 00:39:11 +0000 (0:00:01.106) 0:00:01.370 ****** 2025-11-11 00:39:19.495166 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:39:19.495177 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:39:19.495188 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:39:19.495198 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:39:19.495208 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:19.495219 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:19.495229 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:39:19.495240 | orchestrator | 2025-11-11 00:39:19.495250 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-11 00:39:19.495261 | orchestrator | 2025-11-11 00:39:19.495272 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-11 00:39:19.495282 | orchestrator | Tuesday 11 November 2025 00:39:13 +0000 (0:00:01.195) 0:00:02.566 ****** 2025-11-11 00:39:19.495293 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:39:19.495303 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:39:19.495314 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:39:19.495325 | orchestrator | ok: [testbed-manager] 2025-11-11 00:39:19.495335 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:39:19.495346 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:39:19.495356 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:19.495367 | orchestrator | 2025-11-11 00:39:19.495378 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-11 00:39:19.495390 | orchestrator | 2025-11-11 00:39:19.495403 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-11 00:39:19.495429 | orchestrator | Tuesday 11 November 2025 00:39:18 +0000 (0:00:05.539) 0:00:08.105 ****** 2025-11-11 00:39:19.495443 | 
orchestrator | skipping: [testbed-manager] 2025-11-11 00:39:19.495455 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:39:19.495467 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:39:19.495480 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:39:19.495492 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:19.495505 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:19.495516 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:39:19.495528 | orchestrator | 2025-11-11 00:39:19.495540 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:39:19.495553 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495567 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495579 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495591 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495603 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495616 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495628 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:39:19.495640 | orchestrator | 2025-11-11 00:39:19.495686 | orchestrator | 2025-11-11 00:39:19.495701 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:39:19.495713 | orchestrator | Tuesday 11 November 2025 00:39:19 +0000 (0:00:00.489) 0:00:08.594 ****** 2025-11-11 00:39:19.495726 | orchestrator | 
=============================================================================== 2025-11-11 00:39:19.495738 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.54s 2025-11-11 00:39:19.495751 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2025-11-11 00:39:19.495763 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-11-11 00:39:19.495774 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-11-11 00:39:21.754634 | orchestrator | 2025-11-11 00:39:21 | INFO  | Task c7a09311-374a-4a77-ad8e-c68a270b6193 (ceph-configure-lvm-volumes) was prepared for execution. 2025-11-11 00:39:21.754779 | orchestrator | 2025-11-11 00:39:21 | INFO  | It takes a moment until task c7a09311-374a-4a77-ad8e-c68a270b6193 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-11-11 00:39:32.980619 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-11 00:39:32.980790 | orchestrator | 2.16.14 2025-11-11 00:39:32.980807 | orchestrator | 2025-11-11 00:39:32.980820 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-11 00:39:32.980832 | orchestrator | 2025-11-11 00:39:32.980846 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-11 00:39:32.980858 | orchestrator | Tuesday 11 November 2025 00:39:26 +0000 (0:00:00.303) 0:00:00.303 ****** 2025-11-11 00:39:32.980869 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-11 00:39:32.980880 | orchestrator | 2025-11-11 00:39:32.980891 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-11 00:39:32.980901 | orchestrator | Tuesday 11 November 2025 00:39:26 +0000 (0:00:00.233) 0:00:00.537 ****** 2025-11-11 00:39:32.980912 | 
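The `facts` task above runs the `osism.commons.facts` role and then a plain fact-gathering play. A minimal sketch under the assumption that the role uses the standard custom-facts directory (the path is an assumption, not taken from the job):

```yaml
# Sketch of the facts plays above; /etc/ansible/facts.d is an
# assumed path, not confirmed by the log.
- name: Apply role facts
  hosts: all
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

- name: Gather facts for all hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Gathers facts about hosts
      ansible.builtin.setup:
```

Running `setup` explicitly (rather than relying on implicit `gather_facts`) lets the playbook refresh facts after the custom facts directory exists, so local facts land in `ansible_local` on the same run.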
orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:32.980923 | orchestrator | 2025-11-11 00:39:32.980933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.980944 | orchestrator | Tuesday 11 November 2025 00:39:26 +0000 (0:00:00.210) 0:00:00.748 ****** 2025-11-11 00:39:32.980955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-11 00:39:32.980966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-11 00:39:32.980976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-11 00:39:32.980987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-11 00:39:32.980997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-11 00:39:32.981008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-11 00:39:32.981018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-11 00:39:32.981029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-11 00:39:32.981040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-11 00:39:32.981050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-11 00:39:32.981070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-11 00:39:32.981081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-11 00:39:32.981092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-11 00:39:32.981102 | orchestrator | 
2025-11-11 00:39:32.981113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981145 | orchestrator | Tuesday 11 November 2025 00:39:26 +0000 (0:00:00.429) 0:00:01.177 ****** 2025-11-11 00:39:32.981156 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981167 | orchestrator | 2025-11-11 00:39:32.981178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981189 | orchestrator | Tuesday 11 November 2025 00:39:27 +0000 (0:00:00.176) 0:00:01.354 ****** 2025-11-11 00:39:32.981199 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981210 | orchestrator | 2025-11-11 00:39:32.981221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981231 | orchestrator | Tuesday 11 November 2025 00:39:27 +0000 (0:00:00.191) 0:00:01.546 ****** 2025-11-11 00:39:32.981242 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981252 | orchestrator | 2025-11-11 00:39:32.981263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981278 | orchestrator | Tuesday 11 November 2025 00:39:27 +0000 (0:00:00.195) 0:00:01.742 ****** 2025-11-11 00:39:32.981289 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981300 | orchestrator | 2025-11-11 00:39:32.981310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981321 | orchestrator | Tuesday 11 November 2025 00:39:27 +0000 (0:00:00.193) 0:00:01.935 ****** 2025-11-11 00:39:32.981333 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981343 | orchestrator | 2025-11-11 00:39:32.981354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981365 | orchestrator | Tuesday 11 November 2025 00:39:27 +0000 
(0:00:00.185) 0:00:02.120 ****** 2025-11-11 00:39:32.981375 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981386 | orchestrator | 2025-11-11 00:39:32.981397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981407 | orchestrator | Tuesday 11 November 2025 00:39:28 +0000 (0:00:00.179) 0:00:02.300 ****** 2025-11-11 00:39:32.981418 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981428 | orchestrator | 2025-11-11 00:39:32.981439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981450 | orchestrator | Tuesday 11 November 2025 00:39:28 +0000 (0:00:00.208) 0:00:02.509 ****** 2025-11-11 00:39:32.981461 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981471 | orchestrator | 2025-11-11 00:39:32.981482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981493 | orchestrator | Tuesday 11 November 2025 00:39:28 +0000 (0:00:00.195) 0:00:02.704 ****** 2025-11-11 00:39:32.981503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013) 2025-11-11 00:39:32.981515 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013) 2025-11-11 00:39:32.981526 | orchestrator | 2025-11-11 00:39:32.981536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981564 | orchestrator | Tuesday 11 November 2025 00:39:28 +0000 (0:00:00.376) 0:00:03.081 ****** 2025-11-11 00:39:32.981576 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882) 2025-11-11 00:39:32.981587 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882) 2025-11-11 00:39:32.981597 | orchestrator | 2025-11-11 
00:39:32.981608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981619 | orchestrator | Tuesday 11 November 2025 00:39:29 +0000 (0:00:00.571) 0:00:03.653 ****** 2025-11-11 00:39:32.981629 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d) 2025-11-11 00:39:32.981640 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d) 2025-11-11 00:39:32.981670 | orchestrator | 2025-11-11 00:39:32.981682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981701 | orchestrator | Tuesday 11 November 2025 00:39:30 +0000 (0:00:00.602) 0:00:04.255 ****** 2025-11-11 00:39:32.981712 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f) 2025-11-11 00:39:32.981723 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f) 2025-11-11 00:39:32.981733 | orchestrator | 2025-11-11 00:39:32.981744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:32.981755 | orchestrator | Tuesday 11 November 2025 00:39:30 +0000 (0:00:00.812) 0:00:05.067 ****** 2025-11-11 00:39:32.981765 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-11 00:39:32.981775 | orchestrator | 2025-11-11 00:39:32.981792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.981802 | orchestrator | Tuesday 11 November 2025 00:39:31 +0000 (0:00:00.328) 0:00:05.396 ****** 2025-11-11 00:39:32.981813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-11 00:39:32.981823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 
2025-11-11 00:39:32.981834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-11 00:39:32.981844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-11 00:39:32.981855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-11 00:39:32.981865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-11 00:39:32.981875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-11 00:39:32.981886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-11 00:39:32.981896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-11 00:39:32.981907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-11 00:39:32.981917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-11 00:39:32.981927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-11 00:39:32.981938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-11 00:39:32.981948 | orchestrator | 2025-11-11 00:39:32.981959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.981970 | orchestrator | Tuesday 11 November 2025 00:39:31 +0000 (0:00:00.355) 0:00:05.751 ****** 2025-11-11 00:39:32.981980 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.981991 | orchestrator | 2025-11-11 00:39:32.982001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982012 | orchestrator 
| Tuesday 11 November 2025 00:39:31 +0000 (0:00:00.192) 0:00:05.944 ****** 2025-11-11 00:39:32.982085 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982097 | orchestrator | 2025-11-11 00:39:32.982108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982118 | orchestrator | Tuesday 11 November 2025 00:39:31 +0000 (0:00:00.190) 0:00:06.135 ****** 2025-11-11 00:39:32.982138 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982149 | orchestrator | 2025-11-11 00:39:32.982160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982171 | orchestrator | Tuesday 11 November 2025 00:39:32 +0000 (0:00:00.206) 0:00:06.342 ****** 2025-11-11 00:39:32.982182 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982192 | orchestrator | 2025-11-11 00:39:32.982203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982214 | orchestrator | Tuesday 11 November 2025 00:39:32 +0000 (0:00:00.191) 0:00:06.534 ****** 2025-11-11 00:39:32.982232 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982243 | orchestrator | 2025-11-11 00:39:32.982254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982264 | orchestrator | Tuesday 11 November 2025 00:39:32 +0000 (0:00:00.229) 0:00:06.763 ****** 2025-11-11 00:39:32.982275 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982285 | orchestrator | 2025-11-11 00:39:32.982296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:32.982307 | orchestrator | Tuesday 11 November 2025 00:39:32 +0000 (0:00:00.213) 0:00:06.977 ****** 2025-11-11 00:39:32.982318 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:32.982328 | orchestrator | 2025-11-11 
00:39:32.982345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192276 | orchestrator | Tuesday 11 November 2025 00:39:32 +0000 (0:00:00.179) 0:00:07.156 ****** 2025-11-11 00:39:40.192401 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192418 | orchestrator | 2025-11-11 00:39:40.192431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192443 | orchestrator | Tuesday 11 November 2025 00:39:33 +0000 (0:00:00.191) 0:00:07.348 ****** 2025-11-11 00:39:40.192454 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-11 00:39:40.192466 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-11 00:39:40.192477 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-11 00:39:40.192488 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-11 00:39:40.192499 | orchestrator | 2025-11-11 00:39:40.192510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192520 | orchestrator | Tuesday 11 November 2025 00:39:34 +0000 (0:00:00.968) 0:00:08.316 ****** 2025-11-11 00:39:40.192531 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192542 | orchestrator | 2025-11-11 00:39:40.192553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192563 | orchestrator | Tuesday 11 November 2025 00:39:34 +0000 (0:00:00.189) 0:00:08.506 ****** 2025-11-11 00:39:40.192574 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192585 | orchestrator | 2025-11-11 00:39:40.192596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192607 | orchestrator | Tuesday 11 November 2025 00:39:34 +0000 (0:00:00.207) 0:00:08.713 ****** 2025-11-11 00:39:40.192618 | orchestrator | skipping: [testbed-node-3] 2025-11-11 
00:39:40.192628 | orchestrator | 2025-11-11 00:39:40.192639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:40.192702 | orchestrator | Tuesday 11 November 2025 00:39:34 +0000 (0:00:00.188) 0:00:08.902 ****** 2025-11-11 00:39:40.192716 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192726 | orchestrator | 2025-11-11 00:39:40.192737 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-11 00:39:40.192748 | orchestrator | Tuesday 11 November 2025 00:39:34 +0000 (0:00:00.195) 0:00:09.098 ****** 2025-11-11 00:39:40.192759 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-11-11 00:39:40.192770 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-11-11 00:39:40.192780 | orchestrator | 2025-11-11 00:39:40.192811 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-11 00:39:40.192825 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.180) 0:00:09.278 ****** 2025-11-11 00:39:40.192837 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192850 | orchestrator | 2025-11-11 00:39:40.192862 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-11 00:39:40.192874 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.135) 0:00:09.414 ****** 2025-11-11 00:39:40.192886 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192899 | orchestrator | 2025-11-11 00:39:40.192912 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-11 00:39:40.192946 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.129) 0:00:09.543 ****** 2025-11-11 00:39:40.192959 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.192971 | orchestrator | 2025-11-11 00:39:40.192984 | orchestrator 
| TASK [Define lvm_volumes structures] ******************************************* 2025-11-11 00:39:40.192996 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.146) 0:00:09.690 ****** 2025-11-11 00:39:40.193009 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:40.193021 | orchestrator | 2025-11-11 00:39:40.193033 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-11 00:39:40.193045 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.137) 0:00:09.827 ****** 2025-11-11 00:39:40.193058 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01811ce3-d07c-5516-bfbb-fba58f4d4962'}}) 2025-11-11 00:39:40.193071 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}}) 2025-11-11 00:39:40.193083 | orchestrator | 2025-11-11 00:39:40.193095 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-11 00:39:40.193108 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.150) 0:00:09.978 ****** 2025-11-11 00:39:40.193120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01811ce3-d07c-5516-bfbb-fba58f4d4962'}})  2025-11-11 00:39:40.193141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}})  2025-11-11 00:39:40.193153 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.193164 | orchestrator | 2025-11-11 00:39:40.193175 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-11 00:39:40.193186 | orchestrator | Tuesday 11 November 2025 00:39:35 +0000 (0:00:00.136) 0:00:10.114 ****** 2025-11-11 00:39:40.193196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01811ce3-d07c-5516-bfbb-fba58f4d4962'}})  
2025-11-11 00:39:40.193207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}})  2025-11-11 00:39:40.193218 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.193229 | orchestrator | 2025-11-11 00:39:40.193239 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-11 00:39:40.193250 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.322) 0:00:10.437 ****** 2025-11-11 00:39:40.193261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01811ce3-d07c-5516-bfbb-fba58f4d4962'}})  2025-11-11 00:39:40.193290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}})  2025-11-11 00:39:40.193302 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:39:40.193313 | orchestrator | 2025-11-11 00:39:40.193323 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-11 00:39:40.193339 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.143) 0:00:10.581 ****** 2025-11-11 00:39:40.193351 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:40.193361 | orchestrator | 2025-11-11 00:39:40.193372 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-11 00:39:40.193383 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.141) 0:00:10.722 ****** 2025-11-11 00:39:40.193393 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:39:40.193404 | orchestrator | 2025-11-11 00:39:40.193415 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-11 00:39:40.193425 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.142) 0:00:10.865 ****** 2025-11-11 00:39:40.193436 | orchestrator | skipping: [testbed-node-3] 2025-11-11 
00:39:40.193447 | orchestrator |
2025-11-11 00:39:40.193457 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-11-11 00:39:40.193468 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.124) 0:00:10.989 ******
2025-11-11 00:39:40.193487 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:39:40.193497 | orchestrator |
2025-11-11 00:39:40.193508 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-11 00:39:40.193519 | orchestrator | Tuesday 11 November 2025 00:39:36 +0000 (0:00:00.127) 0:00:11.116 ******
2025-11-11 00:39:40.193529 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:39:40.193540 | orchestrator |
2025-11-11 00:39:40.193551 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-11 00:39:40.193561 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.147) 0:00:11.264 ******
2025-11-11 00:39:40.193572 | orchestrator | ok: [testbed-node-3] => {
2025-11-11 00:39:40.193583 | orchestrator |     "ceph_osd_devices": {
2025-11-11 00:39:40.193593 | orchestrator |         "sdb": {
2025-11-11 00:39:40.193604 | orchestrator |             "osd_lvm_uuid": "01811ce3-d07c-5516-bfbb-fba58f4d4962"
2025-11-11 00:39:40.193615 | orchestrator |         },
2025-11-11 00:39:40.193626 | orchestrator |         "sdc": {
2025-11-11 00:39:40.193636 | orchestrator |             "osd_lvm_uuid": "d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2"
2025-11-11 00:39:40.193647 | orchestrator |         }
2025-11-11 00:39:40.193678 | orchestrator |     }
2025-11-11 00:39:40.193690 | orchestrator | }
2025-11-11 00:39:40.193701 | orchestrator |
2025-11-11 00:39:40.193711 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-11 00:39:40.193722 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.131) 0:00:11.396 ******
2025-11-11 00:39:40.193733 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:39:40.193743 | orchestrator |
2025-11-11 00:39:40.193754 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-11 00:39:40.193764 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.113) 0:00:11.509 ******
2025-11-11 00:39:40.193775 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:39:40.193786 | orchestrator |
2025-11-11 00:39:40.193797 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-11 00:39:40.193807 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.122) 0:00:11.632 ******
2025-11-11 00:39:40.193818 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:39:40.193828 | orchestrator |
2025-11-11 00:39:40.193839 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-11 00:39:40.193849 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.139) 0:00:11.772 ******
2025-11-11 00:39:40.193860 | orchestrator | changed: [testbed-node-3] => {
2025-11-11 00:39:40.193871 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-11-11 00:39:40.193882 | orchestrator |         "ceph_osd_devices": {
2025-11-11 00:39:40.193892 | orchestrator |             "sdb": {
2025-11-11 00:39:40.193903 | orchestrator |                 "osd_lvm_uuid": "01811ce3-d07c-5516-bfbb-fba58f4d4962"
2025-11-11 00:39:40.193914 | orchestrator |             },
2025-11-11 00:39:40.193925 | orchestrator |             "sdc": {
2025-11-11 00:39:40.193935 | orchestrator |                 "osd_lvm_uuid": "d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2"
2025-11-11 00:39:40.193946 | orchestrator |             }
2025-11-11 00:39:40.193957 | orchestrator |         },
2025-11-11 00:39:40.193967 | orchestrator |         "lvm_volumes": [
2025-11-11 00:39:40.193978 | orchestrator |             {
2025-11-11 00:39:40.193989 | orchestrator |                 "data": "osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962",
2025-11-11 00:39:40.193999 | orchestrator |                 "data_vg": "ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962"
2025-11-11 00:39:40.194010 | orchestrator |             },
2025-11-11 00:39:40.194084 | orchestrator |             {
2025-11-11 00:39:40.194096 | orchestrator |                 "data": "osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2",
2025-11-11 00:39:40.194107 | orchestrator |                 "data_vg": "ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2"
2025-11-11 00:39:40.194124 | orchestrator |             }
2025-11-11 00:39:40.194135 | orchestrator |         ]
2025-11-11 00:39:40.194146 | orchestrator |     }
2025-11-11 00:39:40.194164 | orchestrator | }
2025-11-11 00:39:40.194175 | orchestrator |
2025-11-11 00:39:40.194185 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-11 00:39:40.194196 | orchestrator | Tuesday 11 November 2025 00:39:37 +0000 (0:00:00.350) 0:00:12.122 ******
2025-11-11 00:39:40.194207 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-11 00:39:40.194218 | orchestrator |
2025-11-11 00:39:40.194228 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-11 00:39:40.194239 | orchestrator |
2025-11-11 00:39:40.194250 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-11 00:39:40.194261 | orchestrator | Tuesday 11 November 2025 00:39:39 +0000 (0:00:01.765) 0:00:13.888 ******
2025-11-11 00:39:40.194271 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-11-11 00:39:40.194282 | orchestrator |
2025-11-11 00:39:40.194292 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-11 00:39:40.194303 | orchestrator | Tuesday 11 November 2025 00:39:39 +0000 (0:00:00.236) 0:00:14.125 ******
2025-11-11 00:39:40.194314 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:39:40.194325 | orchestrator |
2025-11-11 00:39:40.194342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:39:47.688417 | orchestrator | Tuesday 11 November
2025 00:39:40 +0000 (0:00:00.247) 0:00:14.372 ****** 2025-11-11 00:39:47.688514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-11 00:39:47.688521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-11 00:39:47.688526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-11 00:39:47.688530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-11 00:39:47.688534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-11 00:39:47.688538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-11 00:39:47.688542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-11 00:39:47.688546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-11 00:39:47.688550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-11 00:39:47.688553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-11 00:39:47.688557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-11 00:39:47.688564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-11 00:39:47.688568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-11 00:39:47.688572 | orchestrator | 2025-11-11 00:39:47.688576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688580 | orchestrator | Tuesday 11 November 2025 00:39:40 +0000 (0:00:00.369) 0:00:14.741 ****** 2025-11-11 00:39:47.688584 | 
orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688589 | orchestrator | 2025-11-11 00:39:47.688593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688597 | orchestrator | Tuesday 11 November 2025 00:39:40 +0000 (0:00:00.195) 0:00:14.936 ****** 2025-11-11 00:39:47.688601 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688605 | orchestrator | 2025-11-11 00:39:47.688608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688612 | orchestrator | Tuesday 11 November 2025 00:39:40 +0000 (0:00:00.194) 0:00:15.130 ****** 2025-11-11 00:39:47.688616 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688620 | orchestrator | 2025-11-11 00:39:47.688624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688649 | orchestrator | Tuesday 11 November 2025 00:39:41 +0000 (0:00:00.198) 0:00:15.328 ****** 2025-11-11 00:39:47.688707 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688711 | orchestrator | 2025-11-11 00:39:47.688715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688719 | orchestrator | Tuesday 11 November 2025 00:39:41 +0000 (0:00:00.193) 0:00:15.522 ****** 2025-11-11 00:39:47.688723 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688727 | orchestrator | 2025-11-11 00:39:47.688731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688735 | orchestrator | Tuesday 11 November 2025 00:39:41 +0000 (0:00:00.524) 0:00:16.046 ****** 2025-11-11 00:39:47.688738 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688742 | orchestrator | 2025-11-11 00:39:47.688761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-11-11 00:39:47.688765 | orchestrator | Tuesday 11 November 2025 00:39:42 +0000 (0:00:00.195) 0:00:16.241 ****** 2025-11-11 00:39:47.688769 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688772 | orchestrator | 2025-11-11 00:39:47.688776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688780 | orchestrator | Tuesday 11 November 2025 00:39:42 +0000 (0:00:00.191) 0:00:16.433 ****** 2025-11-11 00:39:47.688784 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688787 | orchestrator | 2025-11-11 00:39:47.688791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688795 | orchestrator | Tuesday 11 November 2025 00:39:42 +0000 (0:00:00.205) 0:00:16.638 ****** 2025-11-11 00:39:47.688799 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5) 2025-11-11 00:39:47.688804 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5) 2025-11-11 00:39:47.688808 | orchestrator | 2025-11-11 00:39:47.688812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688815 | orchestrator | Tuesday 11 November 2025 00:39:42 +0000 (0:00:00.401) 0:00:17.040 ****** 2025-11-11 00:39:47.688819 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34) 2025-11-11 00:39:47.688823 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34) 2025-11-11 00:39:47.688827 | orchestrator | 2025-11-11 00:39:47.688830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688834 | orchestrator | Tuesday 11 November 2025 00:39:43 +0000 (0:00:00.400) 0:00:17.441 ****** 2025-11-11 00:39:47.688838 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3) 2025-11-11 00:39:47.688842 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3) 2025-11-11 00:39:47.688845 | orchestrator | 2025-11-11 00:39:47.688849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688865 | orchestrator | Tuesday 11 November 2025 00:39:43 +0000 (0:00:00.418) 0:00:17.859 ****** 2025-11-11 00:39:47.688869 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860) 2025-11-11 00:39:47.688873 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860) 2025-11-11 00:39:47.688877 | orchestrator | 2025-11-11 00:39:47.688881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:39:47.688885 | orchestrator | Tuesday 11 November 2025 00:39:44 +0000 (0:00:00.390) 0:00:18.249 ****** 2025-11-11 00:39:47.688888 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-11 00:39:47.688892 | orchestrator | 2025-11-11 00:39:47.688896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.688900 | orchestrator | Tuesday 11 November 2025 00:39:44 +0000 (0:00:00.311) 0:00:18.561 ****** 2025-11-11 00:39:47.688910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-11 00:39:47.688914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-11 00:39:47.688917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-11 00:39:47.688921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-11 
00:39:47.688925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-11 00:39:47.688928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-11 00:39:47.688932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-11 00:39:47.688936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-11 00:39:47.688939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-11 00:39:47.688943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-11 00:39:47.688947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-11 00:39:47.688950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-11 00:39:47.688954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-11 00:39:47.688957 | orchestrator | 2025-11-11 00:39:47.688961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.688965 | orchestrator | Tuesday 11 November 2025 00:39:44 +0000 (0:00:00.395) 0:00:18.956 ****** 2025-11-11 00:39:47.688970 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688974 | orchestrator | 2025-11-11 00:39:47.688978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.688985 | orchestrator | Tuesday 11 November 2025 00:39:45 +0000 (0:00:00.553) 0:00:19.510 ****** 2025-11-11 00:39:47.688989 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.688994 | orchestrator | 2025-11-11 00:39:47.688998 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2025-11-11 00:39:47.689002 | orchestrator | Tuesday 11 November 2025 00:39:45 +0000 (0:00:00.189) 0:00:19.700 ****** 2025-11-11 00:39:47.689006 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.689010 | orchestrator | 2025-11-11 00:39:47.689015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689019 | orchestrator | Tuesday 11 November 2025 00:39:45 +0000 (0:00:00.192) 0:00:19.893 ****** 2025-11-11 00:39:47.689023 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.689027 | orchestrator | 2025-11-11 00:39:47.689031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689036 | orchestrator | Tuesday 11 November 2025 00:39:45 +0000 (0:00:00.185) 0:00:20.079 ****** 2025-11-11 00:39:47.689040 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.689044 | orchestrator | 2025-11-11 00:39:47.689048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689053 | orchestrator | Tuesday 11 November 2025 00:39:46 +0000 (0:00:00.203) 0:00:20.282 ****** 2025-11-11 00:39:47.689057 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.689061 | orchestrator | 2025-11-11 00:39:47.689065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689070 | orchestrator | Tuesday 11 November 2025 00:39:46 +0000 (0:00:00.197) 0:00:20.479 ****** 2025-11-11 00:39:47.689074 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:47.689078 | orchestrator | 2025-11-11 00:39:47.689082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689086 | orchestrator | Tuesday 11 November 2025 00:39:46 +0000 (0:00:00.185) 0:00:20.665 ****** 2025-11-11 00:39:47.689094 | orchestrator | skipping: 
[testbed-node-4] 2025-11-11 00:39:47.689098 | orchestrator | 2025-11-11 00:39:47.689102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689106 | orchestrator | Tuesday 11 November 2025 00:39:46 +0000 (0:00:00.223) 0:00:20.889 ****** 2025-11-11 00:39:47.689110 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-11 00:39:47.689116 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-11 00:39:47.689120 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-11 00:39:47.689124 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-11 00:39:47.689129 | orchestrator | 2025-11-11 00:39:47.689133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:47.689137 | orchestrator | Tuesday 11 November 2025 00:39:47 +0000 (0:00:00.784) 0:00:21.673 ****** 2025-11-11 00:39:47.689141 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.878697 | orchestrator | 2025-11-11 00:39:53.878823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:53.878840 | orchestrator | Tuesday 11 November 2025 00:39:47 +0000 (0:00:00.196) 0:00:21.870 ****** 2025-11-11 00:39:53.878852 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.878864 | orchestrator | 2025-11-11 00:39:53.878876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:53.878887 | orchestrator | Tuesday 11 November 2025 00:39:47 +0000 (0:00:00.190) 0:00:22.060 ****** 2025-11-11 00:39:53.878898 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.878909 | orchestrator | 2025-11-11 00:39:53.878920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:39:53.878931 | orchestrator | Tuesday 11 November 2025 00:39:48 +0000 (0:00:00.192) 0:00:22.253 ****** 2025-11-11 
00:39:53.878941 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.878952 | orchestrator | 2025-11-11 00:39:53.878963 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-11 00:39:53.878973 | orchestrator | Tuesday 11 November 2025 00:39:48 +0000 (0:00:00.584) 0:00:22.838 ****** 2025-11-11 00:39:53.878985 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-11-11 00:39:53.878995 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-11-11 00:39:53.879006 | orchestrator | 2025-11-11 00:39:53.879017 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-11 00:39:53.879027 | orchestrator | Tuesday 11 November 2025 00:39:48 +0000 (0:00:00.176) 0:00:23.014 ****** 2025-11-11 00:39:53.879038 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879049 | orchestrator | 2025-11-11 00:39:53.879061 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-11 00:39:53.879071 | orchestrator | Tuesday 11 November 2025 00:39:48 +0000 (0:00:00.140) 0:00:23.154 ****** 2025-11-11 00:39:53.879082 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879093 | orchestrator | 2025-11-11 00:39:53.879104 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-11 00:39:53.879114 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.130) 0:00:23.285 ****** 2025-11-11 00:39:53.879125 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879136 | orchestrator | 2025-11-11 00:39:53.879146 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-11 00:39:53.879157 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.125) 0:00:23.410 ****** 2025-11-11 00:39:53.879168 | orchestrator | ok: [testbed-node-4] 2025-11-11 
00:39:53.879180 | orchestrator | 2025-11-11 00:39:53.879191 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-11 00:39:53.879202 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.133) 0:00:23.544 ****** 2025-11-11 00:39:53.879214 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}}) 2025-11-11 00:39:53.879225 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fda84b1-4127-5701-96e6-fb2774ba2cbf'}}) 2025-11-11 00:39:53.879264 | orchestrator | 2025-11-11 00:39:53.879275 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-11 00:39:53.879286 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.164) 0:00:23.709 ****** 2025-11-11 00:39:53.879297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}})  2025-11-11 00:39:53.879331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fda84b1-4127-5701-96e6-fb2774ba2cbf'}})  2025-11-11 00:39:53.879343 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879353 | orchestrator | 2025-11-11 00:39:53.879364 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-11 00:39:53.879375 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.142) 0:00:23.851 ****** 2025-11-11 00:39:53.879386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}})  2025-11-11 00:39:53.879397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fda84b1-4127-5701-96e6-fb2774ba2cbf'}})  2025-11-11 00:39:53.879407 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879418 | 
orchestrator | 2025-11-11 00:39:53.879429 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-11 00:39:53.879439 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.150) 0:00:24.002 ****** 2025-11-11 00:39:53.879451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}})  2025-11-11 00:39:53.879462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fda84b1-4127-5701-96e6-fb2774ba2cbf'}})  2025-11-11 00:39:53.879472 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879483 | orchestrator | 2025-11-11 00:39:53.879494 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-11 00:39:53.879505 | orchestrator | Tuesday 11 November 2025 00:39:49 +0000 (0:00:00.144) 0:00:24.147 ****** 2025-11-11 00:39:53.879516 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:39:53.879526 | orchestrator | 2025-11-11 00:39:53.879537 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-11 00:39:53.879548 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.131) 0:00:24.278 ****** 2025-11-11 00:39:53.879559 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:39:53.879570 | orchestrator | 2025-11-11 00:39:53.879582 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-11 00:39:53.879601 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.136) 0:00:24.414 ****** 2025-11-11 00:39:53.879644 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:39:53.879693 | orchestrator | 2025-11-11 00:39:53.879706 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-11 00:39:53.879717 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.303) 0:00:24.718 
******
2025-11-11 00:39:53.879727 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:39:53.879738 | orchestrator |
2025-11-11 00:39:53.879749 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-11 00:39:53.879760 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.135) 0:00:24.854 ******
2025-11-11 00:39:53.879771 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:39:53.879781 | orchestrator |
2025-11-11 00:39:53.879792 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-11 00:39:53.879803 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.139) 0:00:24.993 ******
2025-11-11 00:39:53.879814 | orchestrator | ok: [testbed-node-4] => {
2025-11-11 00:39:53.879825 | orchestrator |     "ceph_osd_devices": {
2025-11-11 00:39:53.879836 | orchestrator |         "sdb": {
2025-11-11 00:39:53.879847 | orchestrator |             "osd_lvm_uuid": "1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8"
2025-11-11 00:39:53.879869 | orchestrator |         },
2025-11-11 00:39:53.879880 | orchestrator |         "sdc": {
2025-11-11 00:39:53.879891 | orchestrator |             "osd_lvm_uuid": "1fda84b1-4127-5701-96e6-fb2774ba2cbf"
2025-11-11 00:39:53.879902 | orchestrator |         }
2025-11-11 00:39:53.879913 | orchestrator |     }
2025-11-11 00:39:53.879924 | orchestrator | }
2025-11-11 00:39:53.879935 | orchestrator |
2025-11-11 00:39:53.879946 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-11 00:39:53.879957 | orchestrator | Tuesday 11 November 2025 00:39:50 +0000 (0:00:00.151) 0:00:25.144 ******
2025-11-11 00:39:53.879967 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:39:53.879978 | orchestrator |
2025-11-11 00:39:53.879989 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-11 00:39:53.880000 | orchestrator | Tuesday 11 November 2025 00:39:51 +0000 (0:00:00.150) 0:00:25.295 ******
2025-11-11 00:39:53.880010 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:39:53.880021 | orchestrator |
2025-11-11 00:39:53.880032 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-11 00:39:53.880043 | orchestrator | Tuesday 11 November 2025 00:39:51 +0000 (0:00:00.113) 0:00:25.408 ******
2025-11-11 00:39:53.880054 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:39:53.880064 | orchestrator |
2025-11-11 00:39:53.880075 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-11 00:39:53.880086 | orchestrator | Tuesday 11 November 2025 00:39:51 +0000 (0:00:00.133) 0:00:25.541 ******
2025-11-11 00:39:53.880096 | orchestrator | changed: [testbed-node-4] => {
2025-11-11 00:39:53.880107 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-11-11 00:39:53.880119 | orchestrator |         "ceph_osd_devices": {
2025-11-11 00:39:53.880130 | orchestrator |             "sdb": {
2025-11-11 00:39:53.880141 | orchestrator |                 "osd_lvm_uuid": "1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8"
2025-11-11 00:39:53.880152 | orchestrator |             },
2025-11-11 00:39:53.880163 | orchestrator |             "sdc": {
2025-11-11 00:39:53.880174 | orchestrator |                 "osd_lvm_uuid": "1fda84b1-4127-5701-96e6-fb2774ba2cbf"
2025-11-11 00:39:53.880185 | orchestrator |             }
2025-11-11 00:39:53.880195 | orchestrator |         },
2025-11-11 00:39:53.880206 | orchestrator |         "lvm_volumes": [
2025-11-11 00:39:53.880217 | orchestrator |             {
2025-11-11 00:39:53.880228 | orchestrator |                 "data": "osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8",
2025-11-11 00:39:53.880238 | orchestrator |                 "data_vg": "ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8"
2025-11-11 00:39:53.880249 | orchestrator |             },
2025-11-11 00:39:53.880260 | orchestrator |             {
2025-11-11 00:39:53.880271 | orchestrator |                 "data": "osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf",
2025-11-11 00:39:53.880281 | orchestrator |                 "data_vg": "ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf"
2025-11-11 00:39:53.880292 | orchestrator |             }
2025-11-11 00:39:53.880303 | orchestrator |         ]
2025-11-11 00:39:53.880314 | orchestrator |     }
2025-11-11 00:39:53.880324 | orchestrator | }
2025-11-11 00:39:53.880335 | orchestrator |
2025-11-11 00:39:53.880346 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-11 00:39:53.880357 | orchestrator | Tuesday 11 November 2025 00:39:51 +0000 (0:00:00.214) 0:00:25.756 ******
2025-11-11 00:39:53.880368 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-11-11 00:39:53.880378 | orchestrator |
2025-11-11 00:39:53.880389 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-11 00:39:53.880400 | orchestrator |
2025-11-11 00:39:53.880411 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-11 00:39:53.880422 | orchestrator | Tuesday 11 November 2025 00:39:52 +0000 (0:00:01.137) 0:00:26.894 ******
2025-11-11 00:39:53.880433 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-11-11 00:39:53.880444 | orchestrator |
2025-11-11 00:39:53.880455 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-11 00:39:53.880479 | orchestrator | Tuesday 11 November 2025 00:39:53 +0000 (0:00:00.569) 0:00:27.464 ******
2025-11-11 00:39:53.880490 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:39:53.880501 | orchestrator |
2025-11-11 00:39:53.880512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:39:53.880522 | orchestrator | Tuesday 11 November 2025 00:39:53 +0000 (0:00:00.236) 0:00:27.700 ******
2025-11-11 00:39:53.880533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-11-11 00:39:53.880544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-5 => (item=loop1) 2025-11-11 00:39:53.880554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-11 00:39:53.880565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-11 00:39:53.880575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-11 00:39:53.880593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-11 00:40:01.376398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-11 00:40:01.376523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-11 00:40:01.376538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-11 00:40:01.376551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-11 00:40:01.376562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-11 00:40:01.376573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-11 00:40:01.376584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-11 00:40:01.376595 | orchestrator | 2025-11-11 00:40:01.376607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:40:01.376620 | orchestrator | Tuesday 11 November 2025 00:39:53 +0000 (0:00:00.355) 0:00:28.056 ****** 2025-11-11 00:40:01.376631 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:40:01.376643 | orchestrator | 2025-11-11 00:40:01.376702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:40:01.376714 | orchestrator | Tuesday 11 November 
2025 00:39:54 +0000 (0:00:00.191) 0:00:28.248 ******
2025-11-11 00:40:01.376725 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376736 | orchestrator |
2025-11-11 00:40:01.376747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.376757 | orchestrator | Tuesday 11 November 2025 00:39:54 +0000 (0:00:00.184) 0:00:28.432 ******
2025-11-11 00:40:01.376768 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376779 | orchestrator |
2025-11-11 00:40:01.376790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.376800 | orchestrator | Tuesday 11 November 2025 00:39:54 +0000 (0:00:00.181) 0:00:28.614 ******
2025-11-11 00:40:01.376811 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376822 | orchestrator |
2025-11-11 00:40:01.376833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.376843 | orchestrator | Tuesday 11 November 2025 00:39:54 +0000 (0:00:00.194) 0:00:28.809 ******
2025-11-11 00:40:01.376854 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376865 | orchestrator |
2025-11-11 00:40:01.376875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.376886 | orchestrator | Tuesday 11 November 2025 00:39:54 +0000 (0:00:00.209) 0:00:29.018 ******
2025-11-11 00:40:01.376897 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376908 | orchestrator |
2025-11-11 00:40:01.376922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.376959 | orchestrator | Tuesday 11 November 2025 00:39:55 +0000 (0:00:00.195) 0:00:29.214 ******
2025-11-11 00:40:01.376972 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.376985 | orchestrator |
2025-11-11 00:40:01.376998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377010 | orchestrator | Tuesday 11 November 2025 00:39:55 +0000 (0:00:00.183) 0:00:29.397 ******
2025-11-11 00:40:01.377022 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377035 | orchestrator |
2025-11-11 00:40:01.377049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377062 | orchestrator | Tuesday 11 November 2025 00:39:55 +0000 (0:00:00.189) 0:00:29.587 ******
2025-11-11 00:40:01.377074 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46)
2025-11-11 00:40:01.377088 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46)
2025-11-11 00:40:01.377101 | orchestrator |
2025-11-11 00:40:01.377114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377127 | orchestrator | Tuesday 11 November 2025 00:39:56 +0000 (0:00:00.767) 0:00:30.355 ******
2025-11-11 00:40:01.377140 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e)
2025-11-11 00:40:01.377152 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e)
2025-11-11 00:40:01.377165 | orchestrator |
2025-11-11 00:40:01.377177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377190 | orchestrator | Tuesday 11 November 2025 00:39:56 +0000 (0:00:00.419) 0:00:30.774 ******
2025-11-11 00:40:01.377203 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7)
2025-11-11 00:40:01.377216 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7)
2025-11-11 00:40:01.377228 | orchestrator |
2025-11-11 00:40:01.377240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377253 | orchestrator | Tuesday 11 November 2025 00:39:56 +0000 (0:00:00.402) 0:00:31.177 ******
2025-11-11 00:40:01.377265 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c)
2025-11-11 00:40:01.377276 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c)
2025-11-11 00:40:01.377287 | orchestrator |
2025-11-11 00:40:01.377297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:40:01.377308 | orchestrator | Tuesday 11 November 2025 00:39:57 +0000 (0:00:00.405) 0:00:31.583 ******
2025-11-11 00:40:01.377318 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-11 00:40:01.377329 | orchestrator |
2025-11-11 00:40:01.377340 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377368 | orchestrator | Tuesday 11 November 2025 00:39:57 +0000 (0:00:00.311) 0:00:31.894 ******
2025-11-11 00:40:01.377379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-11-11 00:40:01.377390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-11-11 00:40:01.377401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-11-11 00:40:01.377411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-11-11 00:40:01.377422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-11-11 00:40:01.377450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-11-11 00:40:01.377463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-11-11 00:40:01.377474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-11-11 00:40:01.377494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-11-11 00:40:01.377504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-11-11 00:40:01.377515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-11-11 00:40:01.377525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-11-11 00:40:01.377536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-11-11 00:40:01.377547 | orchestrator |
2025-11-11 00:40:01.377557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377568 | orchestrator | Tuesday 11 November 2025 00:39:58 +0000 (0:00:00.369) 0:00:32.264 ******
2025-11-11 00:40:01.377579 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377589 | orchestrator |
2025-11-11 00:40:01.377600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377610 | orchestrator | Tuesday 11 November 2025 00:39:58 +0000 (0:00:00.192) 0:00:32.457 ******
2025-11-11 00:40:01.377621 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377632 | orchestrator |
2025-11-11 00:40:01.377642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377686 | orchestrator | Tuesday 11 November 2025 00:39:58 +0000 (0:00:00.207) 0:00:32.664 ******
2025-11-11 00:40:01.377699 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377709 | orchestrator |
2025-11-11 00:40:01.377720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377731 | orchestrator | Tuesday 11 November 2025 00:39:58 +0000 (0:00:00.191) 0:00:32.856 ******
2025-11-11 00:40:01.377742 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377752 | orchestrator |
2025-11-11 00:40:01.377763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377773 | orchestrator | Tuesday 11 November 2025 00:39:58 +0000 (0:00:00.181) 0:00:33.038 ******
2025-11-11 00:40:01.377784 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377795 | orchestrator |
2025-11-11 00:40:01.377805 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377816 | orchestrator | Tuesday 11 November 2025 00:39:59 +0000 (0:00:00.193) 0:00:33.231 ******
2025-11-11 00:40:01.377826 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377837 | orchestrator |
2025-11-11 00:40:01.377847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377858 | orchestrator | Tuesday 11 November 2025 00:39:59 +0000 (0:00:00.543) 0:00:33.775 ******
2025-11-11 00:40:01.377869 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377879 | orchestrator |
2025-11-11 00:40:01.377890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377900 | orchestrator | Tuesday 11 November 2025 00:39:59 +0000 (0:00:00.198) 0:00:33.974 ******
2025-11-11 00:40:01.377911 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.377921 | orchestrator |
2025-11-11 00:40:01.377932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.377943 | orchestrator | Tuesday 11 November 2025 00:39:59 +0000 (0:00:00.202) 0:00:34.176 ******
2025-11-11 00:40:01.377953 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-11-11 00:40:01.377964 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-11-11 00:40:01.377975 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-11-11 00:40:01.377985 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-11-11 00:40:01.377996 | orchestrator |
2025-11-11 00:40:01.378006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.378079 | orchestrator | Tuesday 11 November 2025 00:40:00 +0000 (0:00:00.612) 0:00:34.788 ******
2025-11-11 00:40:01.378094 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.378112 | orchestrator |
2025-11-11 00:40:01.378123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.378134 | orchestrator | Tuesday 11 November 2025 00:40:00 +0000 (0:00:00.185) 0:00:34.974 ******
2025-11-11 00:40:01.378144 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.378155 | orchestrator |
2025-11-11 00:40:01.378166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.378176 | orchestrator | Tuesday 11 November 2025 00:40:00 +0000 (0:00:00.188) 0:00:35.162 ******
2025-11-11 00:40:01.378187 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.378197 | orchestrator |
2025-11-11 00:40:01.378208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:40:01.378219 | orchestrator | Tuesday 11 November 2025 00:40:01 +0000 (0:00:00.194) 0:00:35.356 ******
2025-11-11 00:40:01.378229 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:01.378240 | orchestrator |
2025-11-11 00:40:01.378259 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-11-11 00:40:05.389617 | orchestrator | Tuesday 11 November 2025 00:40:01 +0000 (0:00:00.200) 0:00:35.557 ******
2025-11-11 00:40:05.389764 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-11-11 00:40:05.389779 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-11-11 00:40:05.389791 | orchestrator |
2025-11-11 00:40:05.389803 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-11-11 00:40:05.389815 | orchestrator | Tuesday 11 November 2025 00:40:01 +0000 (0:00:00.198) 0:00:35.755 ******
2025-11-11 00:40:05.389826 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.389836 | orchestrator |
2025-11-11 00:40:05.389847 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-11-11 00:40:05.389858 | orchestrator | Tuesday 11 November 2025 00:40:01 +0000 (0:00:00.140) 0:00:35.895 ******
2025-11-11 00:40:05.389868 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.389879 | orchestrator |
2025-11-11 00:40:05.389890 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-11-11 00:40:05.389900 | orchestrator | Tuesday 11 November 2025 00:40:01 +0000 (0:00:00.127) 0:00:36.022 ******
2025-11-11 00:40:05.389911 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.389921 | orchestrator |
2025-11-11 00:40:05.389932 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-11-11 00:40:05.389943 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.307) 0:00:36.330 ******
2025-11-11 00:40:05.389953 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:40:05.389964 | orchestrator |
2025-11-11 00:40:05.389976 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-11-11 00:40:05.389987 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.129) 0:00:36.460 ******
2025-11-11 00:40:05.389998 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af11c135-cf10-5d68-b776-281fb5d39e8e'}})
2025-11-11 00:40:05.390009 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1515626-32f0-5abe-9383-a4f06f352cf6'}})
2025-11-11 00:40:05.390072 | orchestrator |
2025-11-11 00:40:05.390084 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-11-11 00:40:05.390095 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.155) 0:00:36.616 ******
2025-11-11 00:40:05.390107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af11c135-cf10-5d68-b776-281fb5d39e8e'}})
2025-11-11 00:40:05.390119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1515626-32f0-5abe-9383-a4f06f352cf6'}})
2025-11-11 00:40:05.390130 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390141 | orchestrator |
2025-11-11 00:40:05.390152 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-11-11 00:40:05.390165 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.137) 0:00:36.753 ******
2025-11-11 00:40:05.390202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af11c135-cf10-5d68-b776-281fb5d39e8e'}})
2025-11-11 00:40:05.390215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1515626-32f0-5abe-9383-a4f06f352cf6'}})
2025-11-11 00:40:05.390227 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390240 | orchestrator |
2025-11-11 00:40:05.390253 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-11-11 00:40:05.390265 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.142) 0:00:36.896 ******
2025-11-11 00:40:05.390293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af11c135-cf10-5d68-b776-281fb5d39e8e'}})
2025-11-11 00:40:05.390306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1515626-32f0-5abe-9383-a4f06f352cf6'}})
2025-11-11 00:40:05.390319 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390331 | orchestrator |
2025-11-11 00:40:05.390344 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-11-11 00:40:05.390357 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.150) 0:00:37.046 ******
2025-11-11 00:40:05.390370 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:40:05.390383 | orchestrator |
2025-11-11 00:40:05.390396 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-11-11 00:40:05.390408 | orchestrator | Tuesday 11 November 2025 00:40:02 +0000 (0:00:00.118) 0:00:37.165 ******
2025-11-11 00:40:05.390421 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:40:05.390433 | orchestrator |
2025-11-11 00:40:05.390446 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-11-11 00:40:05.390458 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.134) 0:00:37.300 ******
2025-11-11 00:40:05.390470 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390483 | orchestrator |
2025-11-11 00:40:05.390496 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-11-11 00:40:05.390509 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.126) 0:00:37.426 ******
2025-11-11 00:40:05.390521 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390532 | orchestrator |
2025-11-11 00:40:05.390543 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-11 00:40:05.390553 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.121) 0:00:37.548 ******
2025-11-11 00:40:05.390564 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390575 | orchestrator |
2025-11-11 00:40:05.390586 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-11 00:40:05.390596 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.129) 0:00:37.677 ******
2025-11-11 00:40:05.390607 | orchestrator | ok: [testbed-node-5] => {
2025-11-11 00:40:05.390618 | orchestrator |     "ceph_osd_devices": {
2025-11-11 00:40:05.390629 | orchestrator |         "sdb": {
2025-11-11 00:40:05.390680 | orchestrator |             "osd_lvm_uuid": "af11c135-cf10-5d68-b776-281fb5d39e8e"
2025-11-11 00:40:05.390695 | orchestrator |         },
2025-11-11 00:40:05.390706 | orchestrator |         "sdc": {
2025-11-11 00:40:05.390717 | orchestrator |             "osd_lvm_uuid": "a1515626-32f0-5abe-9383-a4f06f352cf6"
2025-11-11 00:40:05.390728 | orchestrator |         }
2025-11-11 00:40:05.390739 | orchestrator |     }
2025-11-11 00:40:05.390749 | orchestrator | }
2025-11-11 00:40:05.390760 | orchestrator |
2025-11-11 00:40:05.390771 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-11 00:40:05.390782 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.124) 0:00:37.802 ******
2025-11-11 00:40:05.390793 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390804 | orchestrator |
2025-11-11 00:40:05.390815 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-11 00:40:05.390826 | orchestrator | Tuesday 11 November 2025 00:40:03 +0000 (0:00:00.115) 0:00:37.918 ******
2025-11-11 00:40:05.390845 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390856 | orchestrator |
2025-11-11 00:40:05.390867 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-11 00:40:05.390878 | orchestrator | Tuesday 11 November 2025 00:40:04 +0000 (0:00:00.307) 0:00:38.226 ******
2025-11-11 00:40:05.390889 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:40:05.390899 | orchestrator |
2025-11-11 00:40:05.390910 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-11 00:40:05.390921 | orchestrator | Tuesday 11 November 2025 00:40:04 +0000 (0:00:00.125) 0:00:38.351 ******
2025-11-11 00:40:05.390932 | orchestrator | changed: [testbed-node-5] => {
2025-11-11 00:40:05.390943 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-11-11 00:40:05.390954 | orchestrator |         "ceph_osd_devices": {
2025-11-11 00:40:05.390965 | orchestrator |             "sdb": {
2025-11-11 00:40:05.390976 | orchestrator |                 "osd_lvm_uuid": "af11c135-cf10-5d68-b776-281fb5d39e8e"
2025-11-11 00:40:05.390987 | orchestrator |             },
2025-11-11 00:40:05.390998 | orchestrator |             "sdc": {
2025-11-11 00:40:05.391009 | orchestrator |                 "osd_lvm_uuid": "a1515626-32f0-5abe-9383-a4f06f352cf6"
2025-11-11 00:40:05.391020 | orchestrator |             }
2025-11-11 00:40:05.391031 | orchestrator |         },
2025-11-11 00:40:05.391042 | orchestrator |         "lvm_volumes": [
2025-11-11 00:40:05.391052 | orchestrator |             {
2025-11-11 00:40:05.391063 | orchestrator |                 "data": "osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e",
2025-11-11 00:40:05.391074 | orchestrator |                 "data_vg": "ceph-af11c135-cf10-5d68-b776-281fb5d39e8e"
2025-11-11 00:40:05.391085 | orchestrator |             },
2025-11-11 00:40:05.391096 | orchestrator |             {
2025-11-11 00:40:05.391107 | orchestrator |                 "data": "osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6",
2025-11-11 00:40:05.391118 | orchestrator |                 "data_vg": "ceph-a1515626-32f0-5abe-9383-a4f06f352cf6"
2025-11-11 00:40:05.391129 | orchestrator |             }
2025-11-11 00:40:05.391144 | orchestrator |         ]
2025-11-11 00:40:05.391156 | orchestrator |     }
2025-11-11 00:40:05.391167 | orchestrator | }
2025-11-11 00:40:05.391177 | orchestrator |
2025-11-11 00:40:05.391188 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-11 00:40:05.391199 | orchestrator | Tuesday 11 November 2025 00:40:04 +0000 (0:00:00.183) 0:00:38.534 ******
2025-11-11 00:40:05.391210 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-11-11 00:40:05.391221 | orchestrator |
2025-11-11 00:40:05.391232 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:40:05.391243 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-11 00:40:05.391255 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-11 00:40:05.391266 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-11 00:40:05.391277 | orchestrator |
2025-11-11 00:40:05.391288 | orchestrator |
2025-11-11 00:40:05.391299 | orchestrator |
2025-11-11 00:40:05.391309 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:40:05.391320 | orchestrator | Tuesday 11 November 2025 00:40:05 +0000 (0:00:01.018) 0:00:39.553 ******
2025-11-11 00:40:05.391331 | orchestrator | ===============================================================================
2025-11-11 00:40:05.391342 | orchestrator | Write configuration file ------------------------------------------------ 3.92s
2025-11-11 00:40:05.391352 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2025-11-11 00:40:05.391363 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-11-11 00:40:05.391374 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.04s
2025-11-11 00:40:05.391391 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-11-11 00:40:05.391402 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-11-11 00:40:05.391413 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-11-11 00:40:05.391423 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2025-11-11 00:40:05.391434 | orchestrator | Print configuration data ------------------------------------------------ 0.75s
2025-11-11 00:40:05.391445 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-11-11 00:40:05.391456 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s
2025-11-11 00:40:05.391466 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-11-11 00:40:05.391477 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-11-11 00:40:05.391495 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-11-11 00:40:05.688571 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.58s
2025-11-11 00:40:05.688636 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2025-11-11 00:40:05.688644 | orchestrator | Set DB devices config data ---------------------------------------------- 0.55s
2025-11-11 00:40:05.688680 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.55s
2025-11-11 00:40:05.688688 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s
2025-11-11 00:40:05.688694 | orchestrator | Print DB devices -------------------------------------------------------- 0.54s
2025-11-11 00:40:28.127281 | orchestrator | 2025-11-11 00:40:28 | INFO  | Task f477a80e-8c5a-4fc8-b2a8-d81e356d1f9a (sync inventory) is running in background. Output coming soon.
2025-11-11 00:40:51.859717 | orchestrator | 2025-11-11 00:40:29 | INFO  | Starting group_vars file reorganization
2025-11-11 00:40:51.859861 | orchestrator | 2025-11-11 00:40:29 | INFO  | Moved 0 file(s) to their respective directories
2025-11-11 00:40:51.859878 | orchestrator | 2025-11-11 00:40:29 | INFO  | Group_vars file reorganization completed
2025-11-11 00:40:51.859889 | orchestrator | 2025-11-11 00:40:32 | INFO  | Starting variable preparation from inventory
2025-11-11 00:40:51.859901 | orchestrator | 2025-11-11 00:40:34 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-11-11 00:40:51.859912 | orchestrator | 2025-11-11 00:40:34 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-11-11 00:40:51.859950 | orchestrator | 2025-11-11 00:40:34 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-11-11 00:40:51.859962 | orchestrator | 2025-11-11 00:40:34 | INFO  | 3 file(s) written, 6 host(s) processed
2025-11-11 00:40:51.859973 | orchestrator | 2025-11-11 00:40:34 | INFO  | Variable preparation completed
2025-11-11 00:40:51.859985 | orchestrator | 2025-11-11 00:40:36 | INFO  | Starting inventory overwrite handling
2025-11-11 00:40:51.860001 | orchestrator | 2025-11-11 00:40:36 | INFO  | Handling group overwrites in 99-overwrite
2025-11-11 00:40:51.860012 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group frr:children from 60-generic
2025-11-11 00:40:51.860023 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group storage:children from 50-kolla
2025-11-11 00:40:51.860034 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group netbird:children from 50-infrastructure
2025-11-11 00:40:51.860044 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group ceph-mds from 50-ceph
2025-11-11 00:40:51.860055 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group ceph-rgw from 50-ceph
2025-11-11 00:40:51.860090 | orchestrator | 2025-11-11 00:40:36 | INFO  | Handling group overwrites in 20-roles
2025-11-11 00:40:51.860101 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removing group k3s_node from 50-infrastructure
2025-11-11 00:40:51.860112 | orchestrator | 2025-11-11 00:40:36 | INFO  | Removed 6 group(s) in total
2025-11-11 00:40:51.860123 | orchestrator | 2025-11-11 00:40:36 | INFO  | Inventory overwrite handling completed
2025-11-11 00:40:51.860133 | orchestrator | 2025-11-11 00:40:37 | INFO  | Starting merge of inventory files
2025-11-11 00:40:51.860144 | orchestrator | 2025-11-11 00:40:37 | INFO  | Inventory files merged successfully
2025-11-11 00:40:51.860155 | orchestrator | 2025-11-11 00:40:41 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-11-11 00:40:51.860165 | orchestrator | 2025-11-11 00:40:50 | INFO  | Successfully wrote ClusterShell configuration
2025-11-11 00:40:51.860176 | orchestrator | [master c3ac266] 2025-11-11-00-40
2025-11-11 00:40:51.860191 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-11-11 00:40:53.882129 | orchestrator | 2025-11-11 00:40:53 | INFO  | Task 3ff0f1a8-c340-4303-ba34-e97f257d3d78 (ceph-create-lvm-devices) was prepared for execution.
2025-11-11 00:40:53.882244 | orchestrator | 2025-11-11 00:40:53 | INFO  | It takes a moment until task 3ff0f1a8-c340-4303-ba34-e97f257d3d78 (ceph-create-lvm-devices) has been started and output is visible here.
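The lvm_volumes entries printed by the "Print configuration data" task above follow a visible naming scheme: each OSD device's osd_lvm_uuid becomes an LV named osd-block-&lt;uuid&gt; inside a VG named ceph-&lt;uuid&gt;. A minimal sketch of that mapping, assuming this naming convention (the playbook's actual Jinja templating is not shown in the log):

```python
# Hypothetical sketch of the "Generate lvm_volumes structure (block only)"
# step, derived from the values visible in the log output above. The
# function name and structure are illustrative, not the playbook's code.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "af11c135-cf10-5d68-b776-281fb5d39e8e"},
    "sdc": {"osd_lvm_uuid": "a1515626-32f0-5abe-9383-a4f06f352cf6"},
}

def build_lvm_volumes(devices: dict) -> list:
    """Map each OSD device entry to a block-only lvm_volumes item:
    LV name 'osd-block-<uuid>' in VG 'ceph-<uuid>'."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

print(build_lvm_volumes(ceph_osd_devices))
```

Running this reproduces the two lvm_volumes entries shown in the _ceph_configure_lvm_config_data debug output.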
2025-11-11 00:41:04.936355 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2025-11-11 00:41:04.936478 | orchestrator | 2.16.14
2025-11-11 00:41:04.936497 | orchestrator |
2025-11-11 00:41:04.936510 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-11-11 00:41:04.936522 | orchestrator |
2025-11-11 00:41:04.936534 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-11 00:41:04.936546 | orchestrator | Tuesday 11 November 2025 00:40:58 +0000 (0:00:00.292) 0:00:00.292 ******
2025-11-11 00:41:04.936557 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-11 00:41:04.936569 | orchestrator |
2025-11-11 00:41:04.936580 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-11 00:41:04.936591 | orchestrator | Tuesday 11 November 2025 00:40:58 +0000 (0:00:00.240) 0:00:00.533 ******
2025-11-11 00:41:04.936602 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:41:04.936614 | orchestrator |
2025-11-11 00:41:04.936625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.936636 | orchestrator | Tuesday 11 November 2025 00:40:58 +0000 (0:00:00.202) 0:00:00.735 ******
2025-11-11 00:41:04.936688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-11-11 00:41:04.936701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-11-11 00:41:04.936712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-11-11 00:41:04.936723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-11-11 00:41:04.936734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-11-11 00:41:04.936745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-11-11 00:41:04.936755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-11-11 00:41:04.936766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-11-11 00:41:04.936777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-11-11 00:41:04.936788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-11-11 00:41:04.936799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-11-11 00:41:04.936837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-11-11 00:41:04.936849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-11-11 00:41:04.936862 | orchestrator |
2025-11-11 00:41:04.936876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.936888 | orchestrator | Tuesday 11 November 2025 00:40:58 +0000 (0:00:00.471) 0:00:01.206 ******
2025-11-11 00:41:04.936901 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.936914 | orchestrator |
2025-11-11 00:41:04.936926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.936939 | orchestrator | Tuesday 11 November 2025 00:40:59 +0000 (0:00:00.189) 0:00:01.396 ******
2025-11-11 00:41:04.936951 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.936963 | orchestrator |
2025-11-11 00:41:04.936976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.936988 | orchestrator | Tuesday 11 November 2025 00:40:59 +0000 (0:00:00.201) 0:00:01.598 ******
2025-11-11 00:41:04.937001 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937013 | orchestrator |
2025-11-11 00:41:04.937026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937040 | orchestrator | Tuesday 11 November 2025 00:40:59 +0000 (0:00:00.189) 0:00:01.788 ******
2025-11-11 00:41:04.937052 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937064 | orchestrator |
2025-11-11 00:41:04.937077 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937089 | orchestrator | Tuesday 11 November 2025 00:40:59 +0000 (0:00:00.199) 0:00:01.987 ******
2025-11-11 00:41:04.937101 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937114 | orchestrator |
2025-11-11 00:41:04.937126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937139 | orchestrator | Tuesday 11 November 2025 00:40:59 +0000 (0:00:00.191) 0:00:02.179 ******
2025-11-11 00:41:04.937151 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937164 | orchestrator |
2025-11-11 00:41:04.937176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937189 | orchestrator | Tuesday 11 November 2025 00:41:00 +0000 (0:00:00.192) 0:00:02.371 ******
2025-11-11 00:41:04.937202 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937214 | orchestrator |
2025-11-11 00:41:04.937225 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937236 | orchestrator | Tuesday 11 November 2025 00:41:00 +0000 (0:00:00.188) 0:00:02.560 ******
2025-11-11 00:41:04.937246 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937257 | orchestrator |
2025-11-11 00:41:04.937268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937279 | orchestrator | Tuesday 11 November 2025 00:41:00 +0000 (0:00:00.198) 0:00:02.758 ******
2025-11-11 00:41:04.937290 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013)
2025-11-11 00:41:04.937303 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013)
2025-11-11 00:41:04.937313 | orchestrator |
2025-11-11 00:41:04.937325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937352 | orchestrator | Tuesday 11 November 2025 00:41:00 +0000 (0:00:00.405) 0:00:03.164 ******
2025-11-11 00:41:04.937364 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882)
2025-11-11 00:41:04.937376 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882)
2025-11-11 00:41:04.937386 | orchestrator |
2025-11-11 00:41:04.937397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937408 | orchestrator | Tuesday 11 November 2025 00:41:01 +0000 (0:00:00.578) 0:00:03.742 ******
2025-11-11 00:41:04.937427 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d)
2025-11-11 00:41:04.937438 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d)
2025-11-11 00:41:04.937448 | orchestrator |
2025-11-11 00:41:04.937459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937470 | orchestrator | Tuesday 11 November 2025 00:41:02 +0000 (0:00:00.621) 0:00:04.364 ******
2025-11-11 00:41:04.937480 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f)
2025-11-11 00:41:04.937491 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f)
2025-11-11 00:41:04.937502 | orchestrator |
2025-11-11 00:41:04.937513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-11 00:41:04.937524 | orchestrator | Tuesday 11 November 2025 00:41:02 +0000 (0:00:00.804) 0:00:05.168 ******
2025-11-11 00:41:04.937534 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-11 00:41:04.937545 | orchestrator |
2025-11-11 00:41:04.937556 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:41:04.937566 | orchestrator | Tuesday 11 November 2025 00:41:03 +0000 (0:00:00.311) 0:00:05.479 ******
2025-11-11 00:41:04.937577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-11-11 00:41:04.937588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-11-11 00:41:04.937616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-11-11 00:41:04.937628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-11-11 00:41:04.937638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-11-11 00:41:04.937666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-11-11 00:41:04.937678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-11-11 00:41:04.937688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-11-11 00:41:04.937699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-11-11 00:41:04.937710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-11-11 00:41:04.937725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-11-11 00:41:04.937736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-11-11 00:41:04.937747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-11-11 00:41:04.937758 | orchestrator |
2025-11-11 00:41:04.937769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:41:04.937779 | orchestrator | Tuesday 11 November 2025 00:41:03 +0000 (0:00:00.388) 0:00:05.867 ******
2025-11-11 00:41:04.937790 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937800 | orchestrator |
2025-11-11 00:41:04.937811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:41:04.937822 | orchestrator | Tuesday 11 November 2025 00:41:03 +0000 (0:00:00.178) 0:00:06.046 ******
2025-11-11 00:41:04.937833 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937843 | orchestrator |
2025-11-11 00:41:04.937854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:41:04.937865 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.179) 0:00:06.226 ******
2025-11-11 00:41:04.937875 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937886 | orchestrator |
2025-11-11 00:41:04.937897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-11 00:41:04.937914 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.180) 0:00:06.406 ******
2025-11-11 00:41:04.937925 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:41:04.937936 | orchestrator |
2025-11-11 00:41:04.937946 | orchestrator | TASK [Add known
partitions to the list of available block devices] ************* 2025-11-11 00:41:04.937957 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.186) 0:00:06.592 ****** 2025-11-11 00:41:04.937968 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:04.937978 | orchestrator | 2025-11-11 00:41:04.937989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:04.938000 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.186) 0:00:06.778 ****** 2025-11-11 00:41:04.938011 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:04.938075 | orchestrator | 2025-11-11 00:41:04.938087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:04.938097 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.179) 0:00:06.958 ****** 2025-11-11 00:41:04.938108 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:04.938119 | orchestrator | 2025-11-11 00:41:04.938136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.620601 | orchestrator | Tuesday 11 November 2025 00:41:04 +0000 (0:00:00.188) 0:00:07.146 ****** 2025-11-11 00:41:12.620737 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.620755 | orchestrator | 2025-11-11 00:41:12.620767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.620779 | orchestrator | Tuesday 11 November 2025 00:41:05 +0000 (0:00:00.185) 0:00:07.332 ****** 2025-11-11 00:41:12.620790 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-11 00:41:12.620802 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-11 00:41:12.620813 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-11 00:41:12.620824 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-11 00:41:12.620834 | orchestrator | 2025-11-11 
00:41:12.620846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.620856 | orchestrator | Tuesday 11 November 2025 00:41:06 +0000 (0:00:00.981) 0:00:08.314 ****** 2025-11-11 00:41:12.620867 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.620878 | orchestrator | 2025-11-11 00:41:12.620889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.620900 | orchestrator | Tuesday 11 November 2025 00:41:06 +0000 (0:00:00.196) 0:00:08.510 ****** 2025-11-11 00:41:12.620911 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.620922 | orchestrator | 2025-11-11 00:41:12.620933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.620944 | orchestrator | Tuesday 11 November 2025 00:41:06 +0000 (0:00:00.192) 0:00:08.702 ****** 2025-11-11 00:41:12.620955 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.620966 | orchestrator | 2025-11-11 00:41:12.620991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:12.621002 | orchestrator | Tuesday 11 November 2025 00:41:06 +0000 (0:00:00.202) 0:00:08.904 ****** 2025-11-11 00:41:12.621023 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621035 | orchestrator | 2025-11-11 00:41:12.621046 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-11 00:41:12.621056 | orchestrator | Tuesday 11 November 2025 00:41:06 +0000 (0:00:00.198) 0:00:09.103 ****** 2025-11-11 00:41:12.621067 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621078 | orchestrator | 2025-11-11 00:41:12.621089 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-11 00:41:12.621100 | orchestrator | Tuesday 11 November 2025 00:41:07 +0000 (0:00:00.125) 
0:00:09.229 ****** 2025-11-11 00:41:12.621111 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01811ce3-d07c-5516-bfbb-fba58f4d4962'}}) 2025-11-11 00:41:12.621123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}}) 2025-11-11 00:41:12.621156 | orchestrator | 2025-11-11 00:41:12.621170 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-11 00:41:12.621183 | orchestrator | Tuesday 11 November 2025 00:41:07 +0000 (0:00:00.170) 0:00:09.400 ****** 2025-11-11 00:41:12.621196 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'}) 2025-11-11 00:41:12.621211 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}) 2025-11-11 00:41:12.621223 | orchestrator | 2025-11-11 00:41:12.621235 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-11 00:41:12.621248 | orchestrator | Tuesday 11 November 2025 00:41:09 +0000 (0:00:01.913) 0:00:11.313 ****** 2025-11-11 00:41:12.621260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621286 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621298 | orchestrator | 2025-11-11 00:41:12.621310 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-11 00:41:12.621322 | orchestrator | Tuesday 11 November 2025 
00:41:09 +0000 (0:00:00.147) 0:00:11.461 ****** 2025-11-11 00:41:12.621335 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'}) 2025-11-11 00:41:12.621347 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}) 2025-11-11 00:41:12.621360 | orchestrator | 2025-11-11 00:41:12.621372 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-11 00:41:12.621385 | orchestrator | Tuesday 11 November 2025 00:41:10 +0000 (0:00:01.423) 0:00:12.884 ****** 2025-11-11 00:41:12.621397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621422 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621434 | orchestrator | 2025-11-11 00:41:12.621447 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-11 00:41:12.621459 | orchestrator | Tuesday 11 November 2025 00:41:10 +0000 (0:00:00.145) 0:00:13.029 ****** 2025-11-11 00:41:12.621489 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621501 | orchestrator | 2025-11-11 00:41:12.621512 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-11 00:41:12.621523 | orchestrator | Tuesday 11 November 2025 00:41:10 +0000 (0:00:00.141) 0:00:13.171 ****** 2025-11-11 00:41:12.621534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 
'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621556 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621566 | orchestrator | 2025-11-11 00:41:12.621577 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-11 00:41:12.621588 | orchestrator | Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.347) 0:00:13.519 ****** 2025-11-11 00:41:12.621599 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621609 | orchestrator | 2025-11-11 00:41:12.621628 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-11 00:41:12.621639 | orchestrator | Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.123) 0:00:13.642 ****** 2025-11-11 00:41:12.621671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621693 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621704 | orchestrator | 2025-11-11 00:41:12.621714 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-11 00:41:12.621725 | orchestrator | Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.150) 0:00:13.793 ****** 2025-11-11 00:41:12.621735 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621746 | orchestrator | 2025-11-11 00:41:12.621757 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-11 00:41:12.621767 | orchestrator | 
Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.139) 0:00:13.932 ****** 2025-11-11 00:41:12.621778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621799 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621810 | orchestrator | 2025-11-11 00:41:12.621821 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-11 00:41:12.621831 | orchestrator | Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.154) 0:00:14.086 ****** 2025-11-11 00:41:12.621859 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:12.621871 | orchestrator | 2025-11-11 00:41:12.621882 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-11 00:41:12.621899 | orchestrator | Tuesday 11 November 2025 00:41:11 +0000 (0:00:00.125) 0:00:14.212 ****** 2025-11-11 00:41:12.621909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621931 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.621942 | orchestrator | 2025-11-11 00:41:12.621952 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-11 00:41:12.621963 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.148) 0:00:14.361 ****** 2025-11-11 00:41:12.621974 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.621984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.621995 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.622006 | orchestrator | 2025-11-11 00:41:12.622064 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-11 00:41:12.622078 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.171) 0:00:14.532 ****** 2025-11-11 00:41:12.622089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:12.622100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:12.622111 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.622130 | orchestrator | 2025-11-11 00:41:12.622140 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-11 00:41:12.622151 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.157) 0:00:14.690 ****** 2025-11-11 00:41:12.622162 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:12.622173 | orchestrator | 2025-11-11 00:41:12.622184 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-11 00:41:12.622202 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.137) 0:00:14.828 ****** 2025-11-11 00:41:18.983242 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.983360 | orchestrator | 2025-11-11 00:41:18.983376 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-11-11 00:41:18.983390 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.125) 0:00:14.953 ****** 2025-11-11 00:41:18.983402 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.983413 | orchestrator | 2025-11-11 00:41:18.983424 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-11 00:41:18.983436 | orchestrator | Tuesday 11 November 2025 00:41:12 +0000 (0:00:00.126) 0:00:15.080 ****** 2025-11-11 00:41:18.983446 | orchestrator | ok: [testbed-node-3] => { 2025-11-11 00:41:18.983458 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-11 00:41:18.983470 | orchestrator | } 2025-11-11 00:41:18.983481 | orchestrator | 2025-11-11 00:41:18.983492 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-11 00:41:18.983503 | orchestrator | Tuesday 11 November 2025 00:41:13 +0000 (0:00:00.317) 0:00:15.397 ****** 2025-11-11 00:41:18.983513 | orchestrator | ok: [testbed-node-3] => { 2025-11-11 00:41:18.983524 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-11 00:41:18.983535 | orchestrator | } 2025-11-11 00:41:18.983546 | orchestrator | 2025-11-11 00:41:18.983556 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-11 00:41:18.983568 | orchestrator | Tuesday 11 November 2025 00:41:13 +0000 (0:00:00.146) 0:00:15.544 ****** 2025-11-11 00:41:18.983579 | orchestrator | ok: [testbed-node-3] => { 2025-11-11 00:41:18.983591 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-11 00:41:18.983602 | orchestrator | } 2025-11-11 00:41:18.983613 | orchestrator | 2025-11-11 00:41:18.983624 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-11 00:41:18.983634 | orchestrator | Tuesday 11 November 2025 00:41:13 +0000 (0:00:00.154) 0:00:15.698 ****** 2025-11-11 00:41:18.983681 | orchestrator | ok: 
[testbed-node-3] 2025-11-11 00:41:18.983694 | orchestrator | 2025-11-11 00:41:18.983704 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-11 00:41:18.983715 | orchestrator | Tuesday 11 November 2025 00:41:14 +0000 (0:00:00.672) 0:00:16.370 ****** 2025-11-11 00:41:18.983726 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:18.983737 | orchestrator | 2025-11-11 00:41:18.983748 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-11 00:41:18.983759 | orchestrator | Tuesday 11 November 2025 00:41:14 +0000 (0:00:00.538) 0:00:16.909 ****** 2025-11-11 00:41:18.983772 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:18.983785 | orchestrator | 2025-11-11 00:41:18.983797 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-11 00:41:18.983810 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.496) 0:00:17.406 ****** 2025-11-11 00:41:18.983822 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:18.983834 | orchestrator | 2025-11-11 00:41:18.983846 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-11 00:41:18.983859 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.141) 0:00:17.547 ****** 2025-11-11 00:41:18.983871 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.983883 | orchestrator | 2025-11-11 00:41:18.983896 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-11 00:41:18.983908 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.101) 0:00:17.648 ****** 2025-11-11 00:41:18.983943 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.983956 | orchestrator | 2025-11-11 00:41:18.983984 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-11 00:41:18.983996 | orchestrator | 
Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.106) 0:00:17.754 ****** 2025-11-11 00:41:18.984008 | orchestrator | ok: [testbed-node-3] => { 2025-11-11 00:41:18.984021 | orchestrator |  "vgs_report": { 2025-11-11 00:41:18.984033 | orchestrator |  "vg": [] 2025-11-11 00:41:18.984046 | orchestrator |  } 2025-11-11 00:41:18.984058 | orchestrator | } 2025-11-11 00:41:18.984070 | orchestrator | 2025-11-11 00:41:18.984082 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-11 00:41:18.984094 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.152) 0:00:17.907 ****** 2025-11-11 00:41:18.984106 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984118 | orchestrator | 2025-11-11 00:41:18.984129 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-11 00:41:18.984140 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.132) 0:00:18.039 ****** 2025-11-11 00:41:18.984150 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984161 | orchestrator | 2025-11-11 00:41:18.984172 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-11 00:41:18.984183 | orchestrator | Tuesday 11 November 2025 00:41:15 +0000 (0:00:00.136) 0:00:18.175 ****** 2025-11-11 00:41:18.984193 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984204 | orchestrator | 2025-11-11 00:41:18.984215 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-11 00:41:18.984226 | orchestrator | Tuesday 11 November 2025 00:41:16 +0000 (0:00:00.303) 0:00:18.479 ****** 2025-11-11 00:41:18.984236 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984247 | orchestrator | 2025-11-11 00:41:18.984258 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-11 00:41:18.984269 | orchestrator | Tuesday 
11 November 2025 00:41:16 +0000 (0:00:00.138) 0:00:18.618 ****** 2025-11-11 00:41:18.984280 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984290 | orchestrator | 2025-11-11 00:41:18.984301 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-11 00:41:18.984312 | orchestrator | Tuesday 11 November 2025 00:41:16 +0000 (0:00:00.136) 0:00:18.754 ****** 2025-11-11 00:41:18.984322 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984333 | orchestrator | 2025-11-11 00:41:18.984344 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-11 00:41:18.984355 | orchestrator | Tuesday 11 November 2025 00:41:16 +0000 (0:00:00.135) 0:00:18.890 ****** 2025-11-11 00:41:18.984365 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984376 | orchestrator | 2025-11-11 00:41:18.984386 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-11 00:41:18.984397 | orchestrator | Tuesday 11 November 2025 00:41:16 +0000 (0:00:00.139) 0:00:19.030 ****** 2025-11-11 00:41:18.984425 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984437 | orchestrator | 2025-11-11 00:41:18.984448 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-11 00:41:18.984458 | orchestrator | Tuesday 11 November 2025 00:41:16 +0000 (0:00:00.133) 0:00:19.163 ****** 2025-11-11 00:41:18.984469 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984479 | orchestrator | 2025-11-11 00:41:18.984490 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-11 00:41:18.984501 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.143) 0:00:19.307 ****** 2025-11-11 00:41:18.984511 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984522 | orchestrator | 2025-11-11 00:41:18.984533 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-11 00:41:18.984543 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.130) 0:00:19.438 ****** 2025-11-11 00:41:18.984554 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984564 | orchestrator | 2025-11-11 00:41:18.984584 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-11 00:41:18.984594 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.131) 0:00:19.569 ****** 2025-11-11 00:41:18.984605 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984616 | orchestrator | 2025-11-11 00:41:18.984626 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-11 00:41:18.984637 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.135) 0:00:19.705 ****** 2025-11-11 00:41:18.984665 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984676 | orchestrator | 2025-11-11 00:41:18.984687 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-11 00:41:18.984698 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.141) 0:00:19.846 ****** 2025-11-11 00:41:18.984708 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984719 | orchestrator | 2025-11-11 00:41:18.984729 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-11 00:41:18.984740 | orchestrator | Tuesday 11 November 2025 00:41:17 +0000 (0:00:00.134) 0:00:19.981 ****** 2025-11-11 00:41:18.984753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:18.984765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 
'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:18.984776 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984787 | orchestrator | 2025-11-11 00:41:18.984798 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-11 00:41:18.984808 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.425) 0:00:20.406 ****** 2025-11-11 00:41:18.984819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:18.984830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:18.984841 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984852 | orchestrator | 2025-11-11 00:41:18.984863 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-11 00:41:18.984873 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.169) 0:00:20.575 ****** 2025-11-11 00:41:18.984884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:18.984895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:18.984906 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984917 | orchestrator | 2025-11-11 00:41:18.984927 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-11 00:41:18.984938 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.145) 0:00:20.721 ****** 2025-11-11 00:41:18.984949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:18.984960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:18.984971 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.984981 | orchestrator | 2025-11-11 00:41:18.984992 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-11 00:41:18.985003 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.157) 0:00:20.879 ****** 2025-11-11 00:41:18.985014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:18.985039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:18.985051 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:18.985062 | orchestrator | 2025-11-11 00:41:18.985073 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-11 00:41:18.985083 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.157) 0:00:21.036 ****** 2025-11-11 00:41:18.985101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.996973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997086 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997102 | orchestrator | 2025-11-11 00:41:23.997114 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-11-11 00:41:23.997127 | orchestrator | Tuesday 11 November 2025 00:41:18 +0000 (0:00:00.160) 0:00:21.197 ****** 2025-11-11 00:41:23.997139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.997150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997161 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997172 | orchestrator | 2025-11-11 00:41:23.997183 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-11 00:41:23.997194 | orchestrator | Tuesday 11 November 2025 00:41:19 +0000 (0:00:00.160) 0:00:21.358 ****** 2025-11-11 00:41:23.997205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.997216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997227 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997237 | orchestrator | 2025-11-11 00:41:23.997248 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-11 00:41:23.997259 | orchestrator | Tuesday 11 November 2025 00:41:19 +0000 (0:00:00.149) 0:00:21.508 ****** 2025-11-11 00:41:23.997270 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:23.997282 | orchestrator | 2025-11-11 00:41:23.997292 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-11 00:41:23.997303 | orchestrator | Tuesday 11 November 2025 00:41:19 +0000 
(0:00:00.510) 0:00:22.018 ****** 2025-11-11 00:41:23.997314 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:23.997324 | orchestrator | 2025-11-11 00:41:23.997335 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-11 00:41:23.997346 | orchestrator | Tuesday 11 November 2025 00:41:20 +0000 (0:00:00.505) 0:00:22.524 ****** 2025-11-11 00:41:23.997357 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:41:23.997367 | orchestrator | 2025-11-11 00:41:23.997378 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-11 00:41:23.997389 | orchestrator | Tuesday 11 November 2025 00:41:20 +0000 (0:00:00.143) 0:00:22.668 ****** 2025-11-11 00:41:23.997417 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'vg_name': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'}) 2025-11-11 00:41:23.997429 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'vg_name': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'}) 2025-11-11 00:41:23.997440 | orchestrator | 2025-11-11 00:41:23.997451 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-11 00:41:23.997484 | orchestrator | Tuesday 11 November 2025 00:41:20 +0000 (0:00:00.173) 0:00:22.842 ****** 2025-11-11 00:41:23.997496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.997507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997518 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997528 | orchestrator | 2025-11-11 00:41:23.997539 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-11-11 00:41:23.997550 | orchestrator | Tuesday 11 November 2025 00:41:20 +0000 (0:00:00.342) 0:00:23.184 ****** 2025-11-11 00:41:23.997561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.997571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997583 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997594 | orchestrator | 2025-11-11 00:41:23.997605 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-11 00:41:23.997615 | orchestrator | Tuesday 11 November 2025 00:41:21 +0000 (0:00:00.146) 0:00:23.330 ****** 2025-11-11 00:41:23.997626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})  2025-11-11 00:41:23.997637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})  2025-11-11 00:41:23.997672 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:41:23.997684 | orchestrator | 2025-11-11 00:41:23.997694 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-11 00:41:23.997705 | orchestrator | Tuesday 11 November 2025 00:41:21 +0000 (0:00:00.158) 0:00:23.489 ****** 2025-11-11 00:41:23.997734 | orchestrator | ok: [testbed-node-3] => { 2025-11-11 00:41:23.997746 | orchestrator |  "lvm_report": { 2025-11-11 00:41:23.997757 | orchestrator |  "lv": [ 2025-11-11 00:41:23.997768 | orchestrator |  { 2025-11-11 00:41:23.997779 | orchestrator |  "lv_name": 
"osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962", 2025-11-11 00:41:23.997790 | orchestrator |  "vg_name": "ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962" 2025-11-11 00:41:23.997801 | orchestrator |  }, 2025-11-11 00:41:23.997812 | orchestrator |  { 2025-11-11 00:41:23.997822 | orchestrator |  "lv_name": "osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2", 2025-11-11 00:41:23.997833 | orchestrator |  "vg_name": "ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2" 2025-11-11 00:41:23.997844 | orchestrator |  } 2025-11-11 00:41:23.997855 | orchestrator |  ], 2025-11-11 00:41:23.997866 | orchestrator |  "pv": [ 2025-11-11 00:41:23.997876 | orchestrator |  { 2025-11-11 00:41:23.997887 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-11 00:41:23.997898 | orchestrator |  "vg_name": "ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962" 2025-11-11 00:41:23.997909 | orchestrator |  }, 2025-11-11 00:41:23.997919 | orchestrator |  { 2025-11-11 00:41:23.997930 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-11 00:41:23.997941 | orchestrator |  "vg_name": "ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2" 2025-11-11 00:41:23.997952 | orchestrator |  } 2025-11-11 00:41:23.997962 | orchestrator |  ] 2025-11-11 00:41:23.997973 | orchestrator |  } 2025-11-11 00:41:23.997984 | orchestrator | } 2025-11-11 00:41:23.997995 | orchestrator | 2025-11-11 00:41:23.998006 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-11 00:41:23.998074 | orchestrator | 2025-11-11 00:41:23.998089 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-11 00:41:23.998100 | orchestrator | Tuesday 11 November 2025 00:41:21 +0000 (0:00:00.277) 0:00:23.766 ****** 2025-11-11 00:41:23.998111 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-11 00:41:23.998122 | orchestrator | 2025-11-11 00:41:23.998133 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-11 
00:41:23.998144 | orchestrator | Tuesday 11 November 2025 00:41:21 +0000 (0:00:00.224) 0:00:23.990 ****** 2025-11-11 00:41:23.998155 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:23.998166 | orchestrator | 2025-11-11 00:41:23.998177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998187 | orchestrator | Tuesday 11 November 2025 00:41:21 +0000 (0:00:00.221) 0:00:24.212 ****** 2025-11-11 00:41:23.998198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-11 00:41:23.998209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-11 00:41:23.998220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-11 00:41:23.998231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-11 00:41:23.998241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-11 00:41:23.998258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-11 00:41:23.998269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-11 00:41:23.998280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-11 00:41:23.998291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-11 00:41:23.998302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-11 00:41:23.998313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-11 00:41:23.998323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-11 00:41:23.998334 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-11 00:41:23.998345 | orchestrator | 2025-11-11 00:41:23.998355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998366 | orchestrator | Tuesday 11 November 2025 00:41:22 +0000 (0:00:00.430) 0:00:24.643 ****** 2025-11-11 00:41:23.998377 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998388 | orchestrator | 2025-11-11 00:41:23.998399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998410 | orchestrator | Tuesday 11 November 2025 00:41:22 +0000 (0:00:00.210) 0:00:24.853 ****** 2025-11-11 00:41:23.998420 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998431 | orchestrator | 2025-11-11 00:41:23.998442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998452 | orchestrator | Tuesday 11 November 2025 00:41:22 +0000 (0:00:00.186) 0:00:25.040 ****** 2025-11-11 00:41:23.998463 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998474 | orchestrator | 2025-11-11 00:41:23.998485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998495 | orchestrator | Tuesday 11 November 2025 00:41:23 +0000 (0:00:00.573) 0:00:25.613 ****** 2025-11-11 00:41:23.998506 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998517 | orchestrator | 2025-11-11 00:41:23.998528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:23.998538 | orchestrator | Tuesday 11 November 2025 00:41:23 +0000 (0:00:00.209) 0:00:25.823 ****** 2025-11-11 00:41:23.998549 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998560 | orchestrator | 2025-11-11 00:41:23.998578 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-11-11 00:41:23.998589 | orchestrator | Tuesday 11 November 2025 00:41:23 +0000 (0:00:00.194) 0:00:26.017 ****** 2025-11-11 00:41:23.998600 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:23.998611 | orchestrator | 2025-11-11 00:41:23.998629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459119 | orchestrator | Tuesday 11 November 2025 00:41:23 +0000 (0:00:00.190) 0:00:26.208 ****** 2025-11-11 00:41:34.459232 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.459247 | orchestrator | 2025-11-11 00:41:34.459260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459272 | orchestrator | Tuesday 11 November 2025 00:41:24 +0000 (0:00:00.191) 0:00:26.400 ****** 2025-11-11 00:41:34.459283 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.459294 | orchestrator | 2025-11-11 00:41:34.459305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459316 | orchestrator | Tuesday 11 November 2025 00:41:24 +0000 (0:00:00.185) 0:00:26.585 ****** 2025-11-11 00:41:34.459327 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5) 2025-11-11 00:41:34.459339 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5) 2025-11-11 00:41:34.459350 | orchestrator | 2025-11-11 00:41:34.459361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459372 | orchestrator | Tuesday 11 November 2025 00:41:24 +0000 (0:00:00.417) 0:00:27.003 ****** 2025-11-11 00:41:34.459382 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34) 2025-11-11 00:41:34.459394 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34) 2025-11-11 00:41:34.459404 | orchestrator | 2025-11-11 00:41:34.459415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459426 | orchestrator | Tuesday 11 November 2025 00:41:25 +0000 (0:00:00.395) 0:00:27.398 ****** 2025-11-11 00:41:34.459437 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3) 2025-11-11 00:41:34.459448 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3) 2025-11-11 00:41:34.459459 | orchestrator | 2025-11-11 00:41:34.459469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459480 | orchestrator | Tuesday 11 November 2025 00:41:25 +0000 (0:00:00.414) 0:00:27.812 ****** 2025-11-11 00:41:34.459491 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860) 2025-11-11 00:41:34.459519 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860) 2025-11-11 00:41:34.459541 | orchestrator | 2025-11-11 00:41:34.459553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:34.459564 | orchestrator | Tuesday 11 November 2025 00:41:26 +0000 (0:00:00.629) 0:00:28.442 ****** 2025-11-11 00:41:34.459574 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-11 00:41:34.459585 | orchestrator | 2025-11-11 00:41:34.459596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.459607 | orchestrator | Tuesday 11 November 2025 00:41:26 +0000 (0:00:00.517) 0:00:28.960 ****** 2025-11-11 00:41:34.459619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-11-11 00:41:34.459632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-11 00:41:34.459664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-11 00:41:34.459699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-11 00:41:34.459733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-11 00:41:34.459744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-11 00:41:34.459755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-11 00:41:34.459766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-11 00:41:34.459777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-11 00:41:34.459787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-11 00:41:34.459798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-11 00:41:34.459809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-11 00:41:34.459819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-11 00:41:34.459830 | orchestrator | 2025-11-11 00:41:34.459840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.459851 | orchestrator | Tuesday 11 November 2025 00:41:27 +0000 (0:00:00.807) 0:00:29.767 ****** 2025-11-11 00:41:34.459862 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.459873 | orchestrator | 2025-11-11 
00:41:34.459884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.459895 | orchestrator | Tuesday 11 November 2025 00:41:27 +0000 (0:00:00.172) 0:00:29.940 ****** 2025-11-11 00:41:34.459906 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.459916 | orchestrator | 2025-11-11 00:41:34.459927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.459938 | orchestrator | Tuesday 11 November 2025 00:41:27 +0000 (0:00:00.194) 0:00:30.134 ****** 2025-11-11 00:41:34.459948 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.459959 | orchestrator | 2025-11-11 00:41:34.459988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460000 | orchestrator | Tuesday 11 November 2025 00:41:28 +0000 (0:00:00.180) 0:00:30.314 ****** 2025-11-11 00:41:34.460011 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460022 | orchestrator | 2025-11-11 00:41:34.460032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460043 | orchestrator | Tuesday 11 November 2025 00:41:28 +0000 (0:00:00.195) 0:00:30.510 ****** 2025-11-11 00:41:34.460054 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460065 | orchestrator | 2025-11-11 00:41:34.460075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460086 | orchestrator | Tuesday 11 November 2025 00:41:28 +0000 (0:00:00.189) 0:00:30.699 ****** 2025-11-11 00:41:34.460096 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460107 | orchestrator | 2025-11-11 00:41:34.460118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460128 | orchestrator | Tuesday 11 November 2025 00:41:28 +0000 (0:00:00.194) 
0:00:30.894 ****** 2025-11-11 00:41:34.460139 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460149 | orchestrator | 2025-11-11 00:41:34.460160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460171 | orchestrator | Tuesday 11 November 2025 00:41:28 +0000 (0:00:00.190) 0:00:31.084 ****** 2025-11-11 00:41:34.460181 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460192 | orchestrator | 2025-11-11 00:41:34.460203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460213 | orchestrator | Tuesday 11 November 2025 00:41:29 +0000 (0:00:00.188) 0:00:31.273 ****** 2025-11-11 00:41:34.460224 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-11 00:41:34.460235 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-11 00:41:34.460246 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-11 00:41:34.460266 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-11 00:41:34.460277 | orchestrator | 2025-11-11 00:41:34.460287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460298 | orchestrator | Tuesday 11 November 2025 00:41:29 +0000 (0:00:00.797) 0:00:32.071 ****** 2025-11-11 00:41:34.460309 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460319 | orchestrator | 2025-11-11 00:41:34.460330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460340 | orchestrator | Tuesday 11 November 2025 00:41:30 +0000 (0:00:00.185) 0:00:32.257 ****** 2025-11-11 00:41:34.460351 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460362 | orchestrator | 2025-11-11 00:41:34.460372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460383 | orchestrator | Tuesday 11 
November 2025 00:41:30 +0000 (0:00:00.580) 0:00:32.837 ****** 2025-11-11 00:41:34.460393 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460404 | orchestrator | 2025-11-11 00:41:34.460415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:34.460425 | orchestrator | Tuesday 11 November 2025 00:41:30 +0000 (0:00:00.193) 0:00:33.031 ****** 2025-11-11 00:41:34.460441 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460452 | orchestrator | 2025-11-11 00:41:34.460463 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-11 00:41:34.460473 | orchestrator | Tuesday 11 November 2025 00:41:30 +0000 (0:00:00.186) 0:00:33.218 ****** 2025-11-11 00:41:34.460484 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460494 | orchestrator | 2025-11-11 00:41:34.460505 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-11 00:41:34.460515 | orchestrator | Tuesday 11 November 2025 00:41:31 +0000 (0:00:00.137) 0:00:33.355 ****** 2025-11-11 00:41:34.460526 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}}) 2025-11-11 00:41:34.460537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fda84b1-4127-5701-96e6-fb2774ba2cbf'}}) 2025-11-11 00:41:34.460548 | orchestrator | 2025-11-11 00:41:34.460558 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-11 00:41:34.460569 | orchestrator | Tuesday 11 November 2025 00:41:31 +0000 (0:00:00.183) 0:00:33.539 ****** 2025-11-11 00:41:34.460581 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}) 2025-11-11 00:41:34.460594 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'}) 2025-11-11 00:41:34.460604 | orchestrator | 2025-11-11 00:41:34.460615 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-11 00:41:34.460626 | orchestrator | Tuesday 11 November 2025 00:41:33 +0000 (0:00:01.733) 0:00:35.272 ****** 2025-11-11 00:41:34.460636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:34.460667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:34.460679 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:34.460689 | orchestrator | 2025-11-11 00:41:34.460700 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-11 00:41:34.460711 | orchestrator | Tuesday 11 November 2025 00:41:33 +0000 (0:00:00.143) 0:00:35.415 ****** 2025-11-11 00:41:34.460721 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}) 2025-11-11 00:41:34.460739 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'}) 2025-11-11 00:41:39.661450 | orchestrator | 2025-11-11 00:41:39.661568 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-11 00:41:39.661586 | orchestrator | Tuesday 11 November 2025 00:41:34 +0000 (0:00:01.251) 0:00:36.667 ****** 2025-11-11 00:41:39.661599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 
'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.661612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.661623 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661635 | orchestrator | 2025-11-11 00:41:39.661671 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-11 00:41:39.661683 | orchestrator | Tuesday 11 November 2025 00:41:34 +0000 (0:00:00.136) 0:00:36.804 ****** 2025-11-11 00:41:39.661694 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661705 | orchestrator | 2025-11-11 00:41:39.661716 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-11 00:41:39.661727 | orchestrator | Tuesday 11 November 2025 00:41:34 +0000 (0:00:00.121) 0:00:36.925 ****** 2025-11-11 00:41:39.661738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.661749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.661760 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661770 | orchestrator | 2025-11-11 00:41:39.661781 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-11 00:41:39.661792 | orchestrator | Tuesday 11 November 2025 00:41:34 +0000 (0:00:00.146) 0:00:37.072 ****** 2025-11-11 00:41:39.661802 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661813 | orchestrator | 2025-11-11 00:41:39.661824 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-11 00:41:39.661834 | orchestrator | 
Tuesday 11 November 2025 00:41:34 +0000 (0:00:00.147) 0:00:37.219 ****** 2025-11-11 00:41:39.661845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.661856 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.661884 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661895 | orchestrator | 2025-11-11 00:41:39.661906 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-11 00:41:39.661917 | orchestrator | Tuesday 11 November 2025 00:41:35 +0000 (0:00:00.322) 0:00:37.542 ****** 2025-11-11 00:41:39.661927 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.661938 | orchestrator | 2025-11-11 00:41:39.661948 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-11 00:41:39.661959 | orchestrator | Tuesday 11 November 2025 00:41:35 +0000 (0:00:00.153) 0:00:37.696 ****** 2025-11-11 00:41:39.661970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.661982 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.661995 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662007 | orchestrator | 2025-11-11 00:41:39.662077 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-11 00:41:39.662093 | orchestrator | Tuesday 11 November 2025 00:41:35 +0000 (0:00:00.135) 0:00:37.831 ****** 2025-11-11 00:41:39.662127 | orchestrator | ok: [testbed-node-4] 
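The play above first queries existing LVM state as JSON (`lvs`/`pvs` with a JSON report format), combines the two reports into a single `lvm_report`, and derives VG/LV name pairs from it before creating the missing block VGs and LVs. A minimal sketch of that combine-and-list step, using sample data shaped like the `lvm_report` printed earlier (the helper name and shortened UUIDs are illustrative, not from the playbook):

```python
import json

# Sample command output shaped like `lvs --reportformat json` and
# `pvs --reportformat json`, mirroring the lvm_report printed by the
# play above (UUIDs shortened for brevity).
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-1efdad6c", "vg_name": "ceph-1efdad6c"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-1efdad6c"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv report sections into one dict, as the
    'Combine JSON from _lvs_cmd_output/_pvs_cmd_output' task does."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

report = combine_reports(lvs_out, pvs_out)
# Build "vg/lv" names, as in the 'Create list of VG/LV names' task.
vg_lv = [f"{e['vg_name']}/{e['lv_name']}" for e in report["lv"]]
print(vg_lv)  # → ['ceph-1efdad6c/osd-block-1efdad6c']
```

The derived list is what later tasks compare against `lvm_volumes` to decide whether a defined block, DB, or WAL LV is missing and the run should fail.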
2025-11-11 00:41:39.662142 | orchestrator | 2025-11-11 00:41:39.662155 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-11 00:41:39.662167 | orchestrator | Tuesday 11 November 2025 00:41:35 +0000 (0:00:00.123) 0:00:37.955 ****** 2025-11-11 00:41:39.662180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.662193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.662204 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662217 | orchestrator | 2025-11-11 00:41:39.662228 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-11 00:41:39.662241 | orchestrator | Tuesday 11 November 2025 00:41:35 +0000 (0:00:00.144) 0:00:38.099 ****** 2025-11-11 00:41:39.662253 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:39.662265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.662277 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662289 | orchestrator | 2025-11-11 00:41:39.662300 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-11 00:41:39.662331 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.136) 0:00:38.236 ****** 2025-11-11 00:41:39.662343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 
00:41:39.662353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:39.662364 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662375 | orchestrator | 2025-11-11 00:41:39.662385 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-11 00:41:39.662396 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.146) 0:00:38.382 ****** 2025-11-11 00:41:39.662406 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662417 | orchestrator | 2025-11-11 00:41:39.662428 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-11 00:41:39.662438 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.131) 0:00:38.514 ****** 2025-11-11 00:41:39.662449 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662459 | orchestrator | 2025-11-11 00:41:39.662470 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-11 00:41:39.662480 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.131) 0:00:38.645 ****** 2025-11-11 00:41:39.662490 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662501 | orchestrator | 2025-11-11 00:41:39.662512 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-11 00:41:39.662522 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.133) 0:00:38.779 ****** 2025-11-11 00:41:39.662533 | orchestrator | ok: [testbed-node-4] => { 2025-11-11 00:41:39.662544 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-11 00:41:39.662554 | orchestrator | } 2025-11-11 00:41:39.662565 | orchestrator | 2025-11-11 00:41:39.662576 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-11 
00:41:39.662586 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.142) 0:00:38.921 ****** 2025-11-11 00:41:39.662597 | orchestrator | ok: [testbed-node-4] => { 2025-11-11 00:41:39.662607 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-11 00:41:39.662618 | orchestrator | } 2025-11-11 00:41:39.662628 | orchestrator | 2025-11-11 00:41:39.662682 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-11 00:41:39.662694 | orchestrator | Tuesday 11 November 2025 00:41:36 +0000 (0:00:00.135) 0:00:39.057 ****** 2025-11-11 00:41:39.662704 | orchestrator | ok: [testbed-node-4] => { 2025-11-11 00:41:39.662715 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-11 00:41:39.662725 | orchestrator | } 2025-11-11 00:41:39.662736 | orchestrator | 2025-11-11 00:41:39.662747 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-11 00:41:39.662757 | orchestrator | Tuesday 11 November 2025 00:41:37 +0000 (0:00:00.306) 0:00:39.363 ****** 2025-11-11 00:41:39.662768 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:39.662779 | orchestrator | 2025-11-11 00:41:39.662789 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-11 00:41:39.662800 | orchestrator | Tuesday 11 November 2025 00:41:37 +0000 (0:00:00.504) 0:00:39.868 ****** 2025-11-11 00:41:39.662811 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:39.662821 | orchestrator | 2025-11-11 00:41:39.662832 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-11 00:41:39.662842 | orchestrator | Tuesday 11 November 2025 00:41:38 +0000 (0:00:00.509) 0:00:40.378 ****** 2025-11-11 00:41:39.662853 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:39.662863 | orchestrator | 2025-11-11 00:41:39.662874 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-11-11 00:41:39.662884 | orchestrator | Tuesday 11 November 2025 00:41:38 +0000 (0:00:00.477) 0:00:40.856 ****** 2025-11-11 00:41:39.662894 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:39.662905 | orchestrator | 2025-11-11 00:41:39.662924 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-11 00:41:39.662935 | orchestrator | Tuesday 11 November 2025 00:41:38 +0000 (0:00:00.146) 0:00:41.002 ****** 2025-11-11 00:41:39.662945 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662956 | orchestrator | 2025-11-11 00:41:39.662966 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-11 00:41:39.662977 | orchestrator | Tuesday 11 November 2025 00:41:38 +0000 (0:00:00.108) 0:00:41.110 ****** 2025-11-11 00:41:39.662987 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.662998 | orchestrator | 2025-11-11 00:41:39.663008 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-11 00:41:39.663018 | orchestrator | Tuesday 11 November 2025 00:41:38 +0000 (0:00:00.106) 0:00:41.217 ****** 2025-11-11 00:41:39.663029 | orchestrator | ok: [testbed-node-4] => { 2025-11-11 00:41:39.663039 | orchestrator |  "vgs_report": { 2025-11-11 00:41:39.663050 | orchestrator |  "vg": [] 2025-11-11 00:41:39.663061 | orchestrator |  } 2025-11-11 00:41:39.663071 | orchestrator | } 2025-11-11 00:41:39.663082 | orchestrator | 2025-11-11 00:41:39.663092 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-11 00:41:39.663103 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.135) 0:00:41.353 ****** 2025-11-11 00:41:39.663113 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.663127 | orchestrator | 2025-11-11 00:41:39.663146 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-11-11 00:41:39.663165 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.126) 0:00:41.479 ****** 2025-11-11 00:41:39.663183 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.663202 | orchestrator | 2025-11-11 00:41:39.663219 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-11 00:41:39.663230 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.132) 0:00:41.612 ****** 2025-11-11 00:41:39.663241 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.663252 | orchestrator | 2025-11-11 00:41:39.663262 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-11 00:41:39.663273 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.135) 0:00:41.747 ****** 2025-11-11 00:41:39.663283 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:39.663294 | orchestrator | 2025-11-11 00:41:39.663321 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-11 00:41:44.135921 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.128) 0:00:41.876 ****** 2025-11-11 00:41:44.136047 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136063 | orchestrator | 2025-11-11 00:41:44.136076 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-11 00:41:44.136088 | orchestrator | Tuesday 11 November 2025 00:41:39 +0000 (0:00:00.316) 0:00:42.192 ****** 2025-11-11 00:41:44.136100 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136111 | orchestrator | 2025-11-11 00:41:44.136123 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-11 00:41:44.136134 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.142) 0:00:42.334 ****** 2025-11-11 00:41:44.136146 | orchestrator | skipping: [testbed-node-4] 
2025-11-11 00:41:44.136156 | orchestrator | 2025-11-11 00:41:44.136168 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-11 00:41:44.136179 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.126) 0:00:42.461 ****** 2025-11-11 00:41:44.136190 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136201 | orchestrator | 2025-11-11 00:41:44.136212 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-11 00:41:44.136223 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.124) 0:00:42.585 ****** 2025-11-11 00:41:44.136235 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136246 | orchestrator | 2025-11-11 00:41:44.136257 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-11 00:41:44.136268 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.128) 0:00:42.714 ****** 2025-11-11 00:41:44.136279 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136290 | orchestrator | 2025-11-11 00:41:44.136301 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-11 00:41:44.136312 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.159) 0:00:42.874 ****** 2025-11-11 00:41:44.136324 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136334 | orchestrator | 2025-11-11 00:41:44.136346 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-11 00:41:44.136357 | orchestrator | Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.122) 0:00:42.997 ****** 2025-11-11 00:41:44.136368 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136379 | orchestrator | 2025-11-11 00:41:44.136390 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-11 00:41:44.136401 | orchestrator | 
Tuesday 11 November 2025 00:41:40 +0000 (0:00:00.133) 0:00:43.130 ****** 2025-11-11 00:41:44.136412 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136423 | orchestrator | 2025-11-11 00:41:44.136435 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-11 00:41:44.136446 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.133) 0:00:43.263 ****** 2025-11-11 00:41:44.136473 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136485 | orchestrator | 2025-11-11 00:41:44.136496 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-11 00:41:44.136507 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.130) 0:00:43.394 ****** 2025-11-11 00:41:44.136520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136544 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136555 | orchestrator | 2025-11-11 00:41:44.136567 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-11 00:41:44.136578 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.145) 0:00:43.540 ****** 2025-11-11 00:41:44.136610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136633 | orchestrator | skipping: 
[testbed-node-4] 2025-11-11 00:41:44.136701 | orchestrator | 2025-11-11 00:41:44.136713 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-11 00:41:44.136724 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.142) 0:00:43.682 ****** 2025-11-11 00:41:44.136735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136756 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136767 | orchestrator | 2025-11-11 00:41:44.136777 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-11 00:41:44.136788 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.314) 0:00:43.997 ****** 2025-11-11 00:41:44.136799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136820 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136831 | orchestrator | 2025-11-11 00:41:44.136859 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-11 00:41:44.136870 | orchestrator | Tuesday 11 November 2025 00:41:41 +0000 (0:00:00.139) 0:00:44.136 ****** 2025-11-11 00:41:44.136881 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 
'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136902 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136913 | orchestrator | 2025-11-11 00:41:44.136924 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-11 00:41:44.136935 | orchestrator | Tuesday 11 November 2025 00:41:42 +0000 (0:00:00.143) 0:00:44.280 ****** 2025-11-11 00:41:44.136946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.136957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.136968 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.136978 | orchestrator | 2025-11-11 00:41:44.136989 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-11 00:41:44.137000 | orchestrator | Tuesday 11 November 2025 00:41:42 +0000 (0:00:00.149) 0:00:44.429 ****** 2025-11-11 00:41:44.137010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.137021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.137032 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.137043 | orchestrator | 2025-11-11 00:41:44.137053 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-11 
00:41:44.137073 | orchestrator | Tuesday 11 November 2025 00:41:42 +0000 (0:00:00.144) 0:00:44.574 ****** 2025-11-11 00:41:44.137084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.137100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.137111 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.137122 | orchestrator | 2025-11-11 00:41:44.137133 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-11 00:41:44.137143 | orchestrator | Tuesday 11 November 2025 00:41:42 +0000 (0:00:00.161) 0:00:44.735 ****** 2025-11-11 00:41:44.137154 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:44.137165 | orchestrator | 2025-11-11 00:41:44.137175 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-11 00:41:44.137186 | orchestrator | Tuesday 11 November 2025 00:41:43 +0000 (0:00:00.526) 0:00:45.262 ****** 2025-11-11 00:41:44.137197 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:44.137207 | orchestrator | 2025-11-11 00:41:44.137218 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-11 00:41:44.137228 | orchestrator | Tuesday 11 November 2025 00:41:43 +0000 (0:00:00.487) 0:00:45.749 ****** 2025-11-11 00:41:44.137239 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:41:44.137250 | orchestrator | 2025-11-11 00:41:44.137260 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-11 00:41:44.137271 | orchestrator | Tuesday 11 November 2025 00:41:43 +0000 (0:00:00.138) 0:00:45.888 ****** 2025-11-11 00:41:44.137282 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'vg_name': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'}) 2025-11-11 00:41:44.137295 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'vg_name': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'}) 2025-11-11 00:41:44.137306 | orchestrator | 2025-11-11 00:41:44.137316 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-11 00:41:44.137327 | orchestrator | Tuesday 11 November 2025 00:41:43 +0000 (0:00:00.164) 0:00:46.053 ****** 2025-11-11 00:41:44.137338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.137349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:44.137360 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:44.137370 | orchestrator | 2025-11-11 00:41:44.137381 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-11 00:41:44.137392 | orchestrator | Tuesday 11 November 2025 00:41:43 +0000 (0:00:00.149) 0:00:46.203 ****** 2025-11-11 00:41:44.137402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:44.137419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:49.868508 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:49.868631 | orchestrator | 2025-11-11 00:41:49.868706 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-11 00:41:49.868720 | 
orchestrator | Tuesday 11 November 2025 00:41:44 +0000 (0:00:00.148) 0:00:46.351 ****** 2025-11-11 00:41:49.868732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})  2025-11-11 00:41:49.868745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})  2025-11-11 00:41:49.868781 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:41:49.868793 | orchestrator | 2025-11-11 00:41:49.868804 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-11 00:41:49.868815 | orchestrator | Tuesday 11 November 2025 00:41:44 +0000 (0:00:00.147) 0:00:46.499 ****** 2025-11-11 00:41:49.868826 | orchestrator | ok: [testbed-node-4] => { 2025-11-11 00:41:49.868837 | orchestrator |  "lvm_report": { 2025-11-11 00:41:49.868849 | orchestrator |  "lv": [ 2025-11-11 00:41:49.868859 | orchestrator |  { 2025-11-11 00:41:49.868871 | orchestrator |  "lv_name": "osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8", 2025-11-11 00:41:49.868883 | orchestrator |  "vg_name": "ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8" 2025-11-11 00:41:49.868893 | orchestrator |  }, 2025-11-11 00:41:49.868904 | orchestrator |  { 2025-11-11 00:41:49.868915 | orchestrator |  "lv_name": "osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf", 2025-11-11 00:41:49.868926 | orchestrator |  "vg_name": "ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf" 2025-11-11 00:41:49.868936 | orchestrator |  } 2025-11-11 00:41:49.868947 | orchestrator |  ], 2025-11-11 00:41:49.868958 | orchestrator |  "pv": [ 2025-11-11 00:41:49.868968 | orchestrator |  { 2025-11-11 00:41:49.868979 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-11 00:41:49.868990 | orchestrator |  "vg_name": "ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8" 2025-11-11 00:41:49.869002 | orchestrator |  }, 2025-11-11 
00:41:49.869014 | orchestrator |  { 2025-11-11 00:41:49.869026 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-11 00:41:49.869038 | orchestrator |  "vg_name": "ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf" 2025-11-11 00:41:49.869050 | orchestrator |  } 2025-11-11 00:41:49.869061 | orchestrator |  ] 2025-11-11 00:41:49.869073 | orchestrator |  } 2025-11-11 00:41:49.869085 | orchestrator | } 2025-11-11 00:41:49.869097 | orchestrator | 2025-11-11 00:41:49.869109 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-11 00:41:49.869122 | orchestrator | 2025-11-11 00:41:49.869134 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-11 00:41:49.869146 | orchestrator | Tuesday 11 November 2025 00:41:44 +0000 (0:00:00.450) 0:00:46.950 ****** 2025-11-11 00:41:49.869159 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-11 00:41:49.869172 | orchestrator | 2025-11-11 00:41:49.869184 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-11 00:41:49.869197 | orchestrator | Tuesday 11 November 2025 00:41:44 +0000 (0:00:00.241) 0:00:47.191 ****** 2025-11-11 00:41:49.869209 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:41:49.869222 | orchestrator | 2025-11-11 00:41:49.869234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869246 | orchestrator | Tuesday 11 November 2025 00:41:45 +0000 (0:00:00.223) 0:00:47.415 ****** 2025-11-11 00:41:49.869258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-11 00:41:49.869270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-11 00:41:49.869283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-11 00:41:49.869295 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-11 00:41:49.869307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-11 00:41:49.869319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-11 00:41:49.869331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-11 00:41:49.869343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-11 00:41:49.869364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-11 00:41:49.869375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-11 00:41:49.869386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-11 00:41:49.869396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-11 00:41:49.869407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-11 00:41:49.869422 | orchestrator | 2025-11-11 00:41:49.869433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869444 | orchestrator | Tuesday 11 November 2025 00:41:45 +0000 (0:00:00.399) 0:00:47.814 ****** 2025-11-11 00:41:49.869455 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869465 | orchestrator | 2025-11-11 00:41:49.869476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869487 | orchestrator | Tuesday 11 November 2025 00:41:45 +0000 (0:00:00.208) 0:00:48.023 ****** 2025-11-11 00:41:49.869498 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869509 | orchestrator | 2025-11-11 
00:41:49.869520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869549 | orchestrator | Tuesday 11 November 2025 00:41:45 +0000 (0:00:00.192) 0:00:48.215 ****** 2025-11-11 00:41:49.869560 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869571 | orchestrator | 2025-11-11 00:41:49.869581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869673 | orchestrator | Tuesday 11 November 2025 00:41:46 +0000 (0:00:00.191) 0:00:48.406 ****** 2025-11-11 00:41:49.869687 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869707 | orchestrator | 2025-11-11 00:41:49.869719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869730 | orchestrator | Tuesday 11 November 2025 00:41:46 +0000 (0:00:00.196) 0:00:48.603 ****** 2025-11-11 00:41:49.869741 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869752 | orchestrator | 2025-11-11 00:41:49.869763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869774 | orchestrator | Tuesday 11 November 2025 00:41:46 +0000 (0:00:00.549) 0:00:49.153 ****** 2025-11-11 00:41:49.869785 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869796 | orchestrator | 2025-11-11 00:41:49.869807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869817 | orchestrator | Tuesday 11 November 2025 00:41:47 +0000 (0:00:00.199) 0:00:49.353 ****** 2025-11-11 00:41:49.869828 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869839 | orchestrator | 2025-11-11 00:41:49.869850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869861 | orchestrator | Tuesday 11 November 2025 00:41:47 +0000 (0:00:00.197) 
0:00:49.550 ****** 2025-11-11 00:41:49.869871 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:49.869882 | orchestrator | 2025-11-11 00:41:49.869893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869904 | orchestrator | Tuesday 11 November 2025 00:41:47 +0000 (0:00:00.184) 0:00:49.734 ****** 2025-11-11 00:41:49.869915 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46) 2025-11-11 00:41:49.869927 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46) 2025-11-11 00:41:49.869938 | orchestrator | 2025-11-11 00:41:49.869949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.869960 | orchestrator | Tuesday 11 November 2025 00:41:47 +0000 (0:00:00.409) 0:00:50.144 ****** 2025-11-11 00:41:49.869971 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e) 2025-11-11 00:41:49.869982 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e) 2025-11-11 00:41:49.870000 | orchestrator | 2025-11-11 00:41:49.870076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.870091 | orchestrator | Tuesday 11 November 2025 00:41:48 +0000 (0:00:00.404) 0:00:50.549 ****** 2025-11-11 00:41:49.870102 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7) 2025-11-11 00:41:49.870113 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7) 2025-11-11 00:41:49.870124 | orchestrator | 2025-11-11 00:41:49.870135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.870146 | orchestrator | Tuesday 11 
November 2025 00:41:48 +0000 (0:00:00.416) 0:00:50.965 ****** 2025-11-11 00:41:49.870156 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c) 2025-11-11 00:41:49.870167 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c) 2025-11-11 00:41:49.870178 | orchestrator | 2025-11-11 00:41:49.870189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-11 00:41:49.870200 | orchestrator | Tuesday 11 November 2025 00:41:49 +0000 (0:00:00.411) 0:00:51.377 ****** 2025-11-11 00:41:49.870211 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-11 00:41:49.870222 | orchestrator | 2025-11-11 00:41:49.870232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:49.870243 | orchestrator | Tuesday 11 November 2025 00:41:49 +0000 (0:00:00.308) 0:00:51.686 ****** 2025-11-11 00:41:49.870254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-11 00:41:49.870265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-11-11 00:41:49.870275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-11 00:41:49.870286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-11 00:41:49.870297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-11 00:41:49.870308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-11 00:41:49.870319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-11 00:41:49.870330 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-11 00:41:49.870341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-11 00:41:49.870352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-11 00:41:49.870363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-11 00:41:49.870383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-11 00:41:59.418918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-11 00:41:59.419057 | orchestrator | 2025-11-11 00:41:59.419076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419124 | orchestrator | Tuesday 11 November 2025 00:41:49 +0000 (0:00:00.391) 0:00:52.078 ****** 2025-11-11 00:41:59.419138 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419150 | orchestrator | 2025-11-11 00:41:59.419161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419172 | orchestrator | Tuesday 11 November 2025 00:41:50 +0000 (0:00:00.197) 0:00:52.276 ****** 2025-11-11 00:41:59.419183 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419194 | orchestrator | 2025-11-11 00:41:59.419205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419240 | orchestrator | Tuesday 11 November 2025 00:41:50 +0000 (0:00:00.619) 0:00:52.895 ****** 2025-11-11 00:41:59.419251 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419262 | orchestrator | 2025-11-11 00:41:59.419273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419283 | 
orchestrator | Tuesday 11 November 2025 00:41:50 +0000 (0:00:00.206) 0:00:53.102 ****** 2025-11-11 00:41:59.419294 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419305 | orchestrator | 2025-11-11 00:41:59.419315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419326 | orchestrator | Tuesday 11 November 2025 00:41:51 +0000 (0:00:00.193) 0:00:53.296 ****** 2025-11-11 00:41:59.419336 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419347 | orchestrator | 2025-11-11 00:41:59.419358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419368 | orchestrator | Tuesday 11 November 2025 00:41:51 +0000 (0:00:00.192) 0:00:53.488 ****** 2025-11-11 00:41:59.419379 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419390 | orchestrator | 2025-11-11 00:41:59.419400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419411 | orchestrator | Tuesday 11 November 2025 00:41:51 +0000 (0:00:00.200) 0:00:53.689 ****** 2025-11-11 00:41:59.419422 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419434 | orchestrator | 2025-11-11 00:41:59.419446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419459 | orchestrator | Tuesday 11 November 2025 00:41:51 +0000 (0:00:00.191) 0:00:53.880 ****** 2025-11-11 00:41:59.419471 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419482 | orchestrator | 2025-11-11 00:41:59.419495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419521 | orchestrator | Tuesday 11 November 2025 00:41:51 +0000 (0:00:00.177) 0:00:54.058 ****** 2025-11-11 00:41:59.419534 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-11 00:41:59.419547 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-11-11 00:41:59.419559 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-11 00:41:59.419571 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-11 00:41:59.419583 | orchestrator | 2025-11-11 00:41:59.419595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419607 | orchestrator | Tuesday 11 November 2025 00:41:52 +0000 (0:00:00.610) 0:00:54.668 ****** 2025-11-11 00:41:59.419619 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419631 | orchestrator | 2025-11-11 00:41:59.419667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419680 | orchestrator | Tuesday 11 November 2025 00:41:52 +0000 (0:00:00.194) 0:00:54.863 ****** 2025-11-11 00:41:59.419693 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419705 | orchestrator | 2025-11-11 00:41:59.419717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419729 | orchestrator | Tuesday 11 November 2025 00:41:52 +0000 (0:00:00.193) 0:00:55.056 ****** 2025-11-11 00:41:59.419741 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419753 | orchestrator | 2025-11-11 00:41:59.419765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-11 00:41:59.419778 | orchestrator | Tuesday 11 November 2025 00:41:53 +0000 (0:00:00.186) 0:00:55.243 ****** 2025-11-11 00:41:59.419788 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.419799 | orchestrator | 2025-11-11 00:41:59.419810 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-11 00:41:59.419821 | orchestrator | Tuesday 11 November 2025 00:41:53 +0000 (0:00:00.194) 0:00:55.437 ****** 2025-11-11 00:41:59.419831 | orchestrator | skipping: [testbed-node-5] 2025-11-11 
00:41:59.419847 | orchestrator | 2025-11-11 00:41:59.419865 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-11 00:41:59.419887 | orchestrator | Tuesday 11 November 2025 00:41:53 +0000 (0:00:00.296) 0:00:55.733 ****** 2025-11-11 00:41:59.419899 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'af11c135-cf10-5d68-b776-281fb5d39e8e'}}) 2025-11-11 00:41:59.419910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1515626-32f0-5abe-9383-a4f06f352cf6'}}) 2025-11-11 00:41:59.419921 | orchestrator | 2025-11-11 00:41:59.419932 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-11 00:41:59.419943 | orchestrator | Tuesday 11 November 2025 00:41:53 +0000 (0:00:00.195) 0:00:55.929 ****** 2025-11-11 00:41:59.419955 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'}) 2025-11-11 00:41:59.419968 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'}) 2025-11-11 00:41:59.419979 | orchestrator | 2025-11-11 00:41:59.419989 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-11 00:41:59.420020 | orchestrator | Tuesday 11 November 2025 00:41:56 +0000 (0:00:02.821) 0:00:58.750 ****** 2025-11-11 00:41:59.420031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:41:59.420044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:41:59.420055 | orchestrator | skipping: 
[testbed-node-5] 2025-11-11 00:41:59.420065 | orchestrator | 2025-11-11 00:41:59.420076 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-11 00:41:59.420087 | orchestrator | Tuesday 11 November 2025 00:41:56 +0000 (0:00:00.163) 0:00:58.914 ****** 2025-11-11 00:41:59.420098 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'}) 2025-11-11 00:41:59.420109 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'}) 2025-11-11 00:41:59.420120 | orchestrator | 2025-11-11 00:41:59.420130 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-11 00:41:59.420141 | orchestrator | Tuesday 11 November 2025 00:41:57 +0000 (0:00:01.270) 0:01:00.185 ****** 2025-11-11 00:41:59.420152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:41:59.420163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:41:59.420173 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420184 | orchestrator | 2025-11-11 00:41:59.420195 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-11 00:41:59.420205 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.146) 0:01:00.331 ****** 2025-11-11 00:41:59.420216 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420227 | orchestrator | 2025-11-11 00:41:59.420237 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-11 00:41:59.420248 | 
orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.127) 0:01:00.459 ****** 2025-11-11 00:41:59.420264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:41:59.420275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:41:59.420286 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420304 | orchestrator | 2025-11-11 00:41:59.420315 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-11 00:41:59.420325 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.132) 0:01:00.591 ****** 2025-11-11 00:41:59.420336 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420347 | orchestrator | 2025-11-11 00:41:59.420358 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-11 00:41:59.420368 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.149) 0:01:00.741 ****** 2025-11-11 00:41:59.420379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:41:59.420390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:41:59.420400 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420411 | orchestrator | 2025-11-11 00:41:59.420422 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-11 00:41:59.420432 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.151) 0:01:00.893 ****** 2025-11-11 00:41:59.420443 | orchestrator | 
skipping: [testbed-node-5] 2025-11-11 00:41:59.420454 | orchestrator | 2025-11-11 00:41:59.420464 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-11 00:41:59.420475 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.124) 0:01:01.017 ****** 2025-11-11 00:41:59.420486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:41:59.420496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:41:59.420507 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:41:59.420518 | orchestrator | 2025-11-11 00:41:59.420528 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-11 00:41:59.420539 | orchestrator | Tuesday 11 November 2025 00:41:58 +0000 (0:00:00.150) 0:01:01.168 ****** 2025-11-11 00:41:59.420550 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:41:59.420561 | orchestrator | 2025-11-11 00:41:59.420571 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-11 00:41:59.420582 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 (0:00:00.323) 0:01:01.491 ****** 2025-11-11 00:41:59.420598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:05.171020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:05.171155 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171184 | orchestrator | 2025-11-11 00:42:05.171197 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-11-11 00:42:05.171212 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 (0:00:00.143) 0:01:01.635 ****** 2025-11-11 00:42:05.171223 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:05.171235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:05.171245 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171256 | orchestrator | 2025-11-11 00:42:05.171267 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-11 00:42:05.171278 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 (0:00:00.144) 0:01:01.779 ****** 2025-11-11 00:42:05.171289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:05.171324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:05.171335 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171346 | orchestrator | 2025-11-11 00:42:05.171357 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-11 00:42:05.171367 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 (0:00:00.158) 0:01:01.938 ****** 2025-11-11 00:42:05.171378 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171389 | orchestrator | 2025-11-11 00:42:05.171400 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-11 00:42:05.171410 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 
(0:00:00.152) 0:01:02.090 ****** 2025-11-11 00:42:05.171421 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171431 | orchestrator | 2025-11-11 00:42:05.171442 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-11 00:42:05.171453 | orchestrator | Tuesday 11 November 2025 00:41:59 +0000 (0:00:00.128) 0:01:02.218 ****** 2025-11-11 00:42:05.171464 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171475 | orchestrator | 2025-11-11 00:42:05.171485 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-11 00:42:05.171496 | orchestrator | Tuesday 11 November 2025 00:42:00 +0000 (0:00:00.129) 0:01:02.348 ****** 2025-11-11 00:42:05.171507 | orchestrator | ok: [testbed-node-5] => { 2025-11-11 00:42:05.171519 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-11 00:42:05.171532 | orchestrator | } 2025-11-11 00:42:05.171545 | orchestrator | 2025-11-11 00:42:05.171557 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-11 00:42:05.171570 | orchestrator | Tuesday 11 November 2025 00:42:00 +0000 (0:00:00.146) 0:01:02.495 ****** 2025-11-11 00:42:05.171582 | orchestrator | ok: [testbed-node-5] => { 2025-11-11 00:42:05.171595 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-11 00:42:05.171608 | orchestrator | } 2025-11-11 00:42:05.171621 | orchestrator | 2025-11-11 00:42:05.171634 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-11 00:42:05.171672 | orchestrator | Tuesday 11 November 2025 00:42:00 +0000 (0:00:00.128) 0:01:02.623 ****** 2025-11-11 00:42:05.171684 | orchestrator | ok: [testbed-node-5] => { 2025-11-11 00:42:05.171697 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-11 00:42:05.171710 | orchestrator | } 2025-11-11 00:42:05.171723 | orchestrator | 2025-11-11 00:42:05.171735 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2025-11-11 00:42:05.171748 | orchestrator | Tuesday 11 November 2025 00:42:00 +0000 (0:00:00.140) 0:01:02.764 ****** 2025-11-11 00:42:05.171760 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:05.171772 | orchestrator | 2025-11-11 00:42:05.171785 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-11 00:42:05.171797 | orchestrator | Tuesday 11 November 2025 00:42:01 +0000 (0:00:00.484) 0:01:03.249 ****** 2025-11-11 00:42:05.171809 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:05.171822 | orchestrator | 2025-11-11 00:42:05.171834 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-11 00:42:05.171846 | orchestrator | Tuesday 11 November 2025 00:42:01 +0000 (0:00:00.506) 0:01:03.755 ****** 2025-11-11 00:42:05.171858 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:05.171871 | orchestrator | 2025-11-11 00:42:05.171883 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-11 00:42:05.171894 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.696) 0:01:04.452 ****** 2025-11-11 00:42:05.171905 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:05.171916 | orchestrator | 2025-11-11 00:42:05.171926 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-11 00:42:05.171937 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.136) 0:01:04.588 ****** 2025-11-11 00:42:05.171956 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.171967 | orchestrator | 2025-11-11 00:42:05.171996 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-11 00:42:05.172007 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.112) 0:01:04.700 ****** 2025-11-11 00:42:05.172018 | orchestrator | 
skipping: [testbed-node-5] 2025-11-11 00:42:05.172028 | orchestrator | 2025-11-11 00:42:05.172039 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-11 00:42:05.172050 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.114) 0:01:04.815 ****** 2025-11-11 00:42:05.172060 | orchestrator | ok: [testbed-node-5] => { 2025-11-11 00:42:05.172071 | orchestrator |  "vgs_report": { 2025-11-11 00:42:05.172082 | orchestrator |  "vg": [] 2025-11-11 00:42:05.172110 | orchestrator |  } 2025-11-11 00:42:05.172122 | orchestrator | } 2025-11-11 00:42:05.172132 | orchestrator | 2025-11-11 00:42:05.172143 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-11 00:42:05.172154 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.132) 0:01:04.947 ****** 2025-11-11 00:42:05.172165 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172175 | orchestrator | 2025-11-11 00:42:05.172186 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-11 00:42:05.172196 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.123) 0:01:05.071 ****** 2025-11-11 00:42:05.172207 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172217 | orchestrator | 2025-11-11 00:42:05.172228 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-11 00:42:05.172238 | orchestrator | Tuesday 11 November 2025 00:42:02 +0000 (0:00:00.137) 0:01:05.208 ****** 2025-11-11 00:42:05.172249 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172260 | orchestrator | 2025-11-11 00:42:05.172270 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-11 00:42:05.172281 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.119) 0:01:05.328 ****** 2025-11-11 00:42:05.172291 | orchestrator | 
skipping: [testbed-node-5] 2025-11-11 00:42:05.172302 | orchestrator | 2025-11-11 00:42:05.172313 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-11 00:42:05.172323 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.134) 0:01:05.462 ****** 2025-11-11 00:42:05.172334 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172344 | orchestrator | 2025-11-11 00:42:05.172355 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-11 00:42:05.172366 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.146) 0:01:05.609 ****** 2025-11-11 00:42:05.172376 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172387 | orchestrator | 2025-11-11 00:42:05.172398 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-11 00:42:05.172408 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.119) 0:01:05.729 ****** 2025-11-11 00:42:05.172419 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172429 | orchestrator | 2025-11-11 00:42:05.172440 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-11 00:42:05.172451 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.128) 0:01:05.857 ****** 2025-11-11 00:42:05.172461 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172472 | orchestrator | 2025-11-11 00:42:05.172482 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-11 00:42:05.172497 | orchestrator | Tuesday 11 November 2025 00:42:03 +0000 (0:00:00.286) 0:01:06.144 ****** 2025-11-11 00:42:05.172508 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172519 | orchestrator | 2025-11-11 00:42:05.172529 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-11 
00:42:05.172540 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.134) 0:01:06.278 ****** 2025-11-11 00:42:05.172551 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172569 | orchestrator | 2025-11-11 00:42:05.172580 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-11 00:42:05.172591 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.130) 0:01:06.409 ****** 2025-11-11 00:42:05.172601 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172612 | orchestrator | 2025-11-11 00:42:05.172623 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-11 00:42:05.172633 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.133) 0:01:06.542 ****** 2025-11-11 00:42:05.172660 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172671 | orchestrator | 2025-11-11 00:42:05.172682 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-11 00:42:05.172692 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.132) 0:01:06.675 ****** 2025-11-11 00:42:05.172703 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172713 | orchestrator | 2025-11-11 00:42:05.172724 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-11 00:42:05.172734 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.123) 0:01:06.798 ****** 2025-11-11 00:42:05.172745 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172755 | orchestrator | 2025-11-11 00:42:05.172766 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-11 00:42:05.172777 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.129) 0:01:06.928 ****** 2025-11-11 00:42:05.172787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:05.172798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:05.172809 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172819 | orchestrator | 2025-11-11 00:42:05.172830 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-11 00:42:05.172841 | orchestrator | Tuesday 11 November 2025 00:42:04 +0000 (0:00:00.157) 0:01:07.085 ****** 2025-11-11 00:42:05.172851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:05.172862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:05.172873 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:05.172883 | orchestrator | 2025-11-11 00:42:05.172894 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-11 00:42:05.172905 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.148) 0:01:07.233 ****** 2025-11-11 00:42:05.172923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097190 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097205 | orchestrator | 2025-11-11 00:42:08.097219 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-11-11 00:42:08.097232 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.153) 0:01:07.387 ****** 2025-11-11 00:42:08.097244 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097266 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097277 | orchestrator | 2025-11-11 00:42:08.097313 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-11 00:42:08.097324 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.155) 0:01:07.542 ****** 2025-11-11 00:42:08.097335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097357 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097368 | orchestrator | 2025-11-11 00:42:08.097379 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-11 00:42:08.097389 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.152) 0:01:07.694 ****** 2025-11-11 00:42:08.097400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097426 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097438 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097449 | orchestrator | 2025-11-11 00:42:08.097459 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-11 00:42:08.097470 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.333) 0:01:08.028 ****** 2025-11-11 00:42:08.097481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097503 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097514 | orchestrator | 2025-11-11 00:42:08.097525 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-11 00:42:08.097536 | orchestrator | Tuesday 11 November 2025 00:42:05 +0000 (0:00:00.158) 0:01:08.187 ****** 2025-11-11 00:42:08.097546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097568 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097578 | orchestrator | 2025-11-11 00:42:08.097591 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-11 00:42:08.097604 | orchestrator | Tuesday 11 November 2025 00:42:06 +0000 (0:00:00.162) 0:01:08.349 ****** 2025-11-11 00:42:08.097616 | 
orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:08.097631 | orchestrator | 2025-11-11 00:42:08.097708 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-11 00:42:08.097721 | orchestrator | Tuesday 11 November 2025 00:42:06 +0000 (0:00:00.517) 0:01:08.866 ****** 2025-11-11 00:42:08.097734 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:08.097746 | orchestrator | 2025-11-11 00:42:08.097759 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-11 00:42:08.097772 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.519) 0:01:09.385 ****** 2025-11-11 00:42:08.097784 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:42:08.097796 | orchestrator | 2025-11-11 00:42:08.097809 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-11 00:42:08.097821 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.139) 0:01:09.525 ****** 2025-11-11 00:42:08.097834 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'vg_name': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'}) 2025-11-11 00:42:08.097858 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'vg_name': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'}) 2025-11-11 00:42:08.097871 | orchestrator | 2025-11-11 00:42:08.097883 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-11 00:42:08.097896 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.173) 0:01:09.698 ****** 2025-11-11 00:42:08.097925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.097939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.097952 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.097963 | orchestrator | 2025-11-11 00:42:08.097975 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-11 00:42:08.097986 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.150) 0:01:09.849 ****** 2025-11-11 00:42:08.097997 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.098008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.098083 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.098098 | orchestrator | 2025-11-11 00:42:08.098109 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-11 00:42:08.098120 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.150) 0:01:10.000 ****** 2025-11-11 00:42:08.098131 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})  2025-11-11 00:42:08.098142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})  2025-11-11 00:42:08.098153 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:42:08.098164 | orchestrator | 2025-11-11 00:42:08.098175 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-11 00:42:08.098186 | orchestrator | Tuesday 11 November 2025 00:42:07 +0000 (0:00:00.147) 0:01:10.148 ****** 2025-11-11 00:42:08.098196 | 
orchestrator | ok: [testbed-node-5] => {
2025-11-11 00:42:08.098207 | orchestrator |     "lvm_report": {
2025-11-11 00:42:08.098225 | orchestrator |         "lv": [
2025-11-11 00:42:08.098237 | orchestrator |             {
2025-11-11 00:42:08.098248 | orchestrator |                 "lv_name": "osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6",
2025-11-11 00:42:08.098260 | orchestrator |                 "vg_name": "ceph-a1515626-32f0-5abe-9383-a4f06f352cf6"
2025-11-11 00:42:08.098271 | orchestrator |             },
2025-11-11 00:42:08.098282 | orchestrator |             {
2025-11-11 00:42:08.098293 | orchestrator |                 "lv_name": "osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e",
2025-11-11 00:42:08.098304 | orchestrator |                 "vg_name": "ceph-af11c135-cf10-5d68-b776-281fb5d39e8e"
2025-11-11 00:42:08.098314 | orchestrator |             }
2025-11-11 00:42:08.098325 | orchestrator |         ],
2025-11-11 00:42:08.098336 | orchestrator |         "pv": [
2025-11-11 00:42:08.098347 | orchestrator |             {
2025-11-11 00:42:08.098358 | orchestrator |                 "pv_name": "/dev/sdb",
2025-11-11 00:42:08.098369 | orchestrator |                 "vg_name": "ceph-af11c135-cf10-5d68-b776-281fb5d39e8e"
2025-11-11 00:42:08.098380 | orchestrator |             },
2025-11-11 00:42:08.098391 | orchestrator |             {
2025-11-11 00:42:08.098402 | orchestrator |                 "pv_name": "/dev/sdc",
2025-11-11 00:42:08.098413 | orchestrator |                 "vg_name": "ceph-a1515626-32f0-5abe-9383-a4f06f352cf6"
2025-11-11 00:42:08.098432 | orchestrator |             }
2025-11-11 00:42:08.098443 | orchestrator |         ]
2025-11-11 00:42:08.098453 | orchestrator |     }
2025-11-11 00:42:08.098464 | orchestrator | }
2025-11-11 00:42:08.098476 | orchestrator |
2025-11-11 00:42:08.098487 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:42:08.098498 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-11 00:42:08.098509 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-11 00:42:08.098520 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-11 00:42:08.098531 | orchestrator |
2025-11-11 00:42:08.098542 | orchestrator |
2025-11-11 00:42:08.098553 | orchestrator |
2025-11-11 00:42:08.098564 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:42:08.098575 | orchestrator | Tuesday 11 November 2025 00:42:08 +0000 (0:00:00.141) 0:01:10.289 ******
2025-11-11 00:42:08.098586 | orchestrator | ===============================================================================
2025-11-11 00:42:08.098597 | orchestrator | Create block VGs -------------------------------------------------------- 6.47s
2025-11-11 00:42:08.098608 | orchestrator | Create block LVs -------------------------------------------------------- 3.95s
2025-11-11 00:42:08.098618 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s
2025-11-11 00:42:08.098630 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.66s
2025-11-11 00:42:08.098658 | orchestrator | Add known partitions to the list of available block devices ------------- 1.59s
2025-11-11 00:42:08.098670 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-11-11 00:42:08.098681 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s
2025-11-11 00:42:08.098692 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-11-11 00:42:08.098725 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2025-11-11 00:42:08.445032 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2025-11-11 00:42:08.445138 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s
2025-11-11 00:42:08.445151 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-11-11 00:42:08.445163 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-11-11 00:42:08.445174 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.73s
2025-11-11 00:42:08.445185 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s
2025-11-11 00:42:08.445196 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s
2025-11-11 00:42:08.445207 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.64s
2025-11-11 00:42:08.445217 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.64s
2025-11-11 00:42:08.445228 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-11-11 00:42:08.445239 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s
2025-11-11 00:42:20.838168 | orchestrator | 2025-11-11 00:42:20 | INFO  | Task cafe305d-0b70-4b68-8e3e-75a6ab2abc1b (facts) was prepared for execution.
2025-11-11 00:42:20.838285 | orchestrator | 2025-11-11 00:42:20 | INFO  | It takes a moment until task cafe305d-0b70-4b68-8e3e-75a6ab2abc1b (facts) has been started and output is visible here.
2025-11-11 00:42:32.406165 | orchestrator |
2025-11-11 00:42:32.406310 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-11-11 00:42:32.406328 | orchestrator |
2025-11-11 00:42:32.406355 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-11-11 00:42:32.407366 | orchestrator | Tuesday 11 November 2025 00:42:24 +0000 (0:00:00.195) 0:00:00.195 ******
2025-11-11 00:42:32.407441 | orchestrator | ok: [testbed-manager]
2025-11-11 00:42:32.407457 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:42:32.407469 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:42:32.407479 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:42:32.407491 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:42:32.407501 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:42:32.407512 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:42:32.407523 | orchestrator |
2025-11-11 00:42:32.407536 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-11-11 00:42:32.407548 | orchestrator | Tuesday 11 November 2025 00:42:25 +0000 (0:00:00.930) 0:00:01.125 ******
2025-11-11 00:42:32.407559 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:42:32.407570 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:42:32.407581 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:42:32.407592 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:42:32.407602 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:42:32.407613 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:42:32.407624 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:42:32.407654 | orchestrator |
2025-11-11 00:42:32.407666 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-11 00:42:32.407676 | orchestrator |
2025-11-11 00:42:32.407687 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-11 00:42:32.407698 | orchestrator | Tuesday 11 November 2025 00:42:26 +0000 (0:00:01.059) 0:00:02.185 ******
2025-11-11 00:42:32.407709 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:42:32.407720 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:42:32.407730 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:42:32.407741 | orchestrator | ok: [testbed-manager]
2025-11-11 00:42:32.407752 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:42:32.407762 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:42:32.407773 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:42:32.407784 | orchestrator |
2025-11-11 00:42:32.407794 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-11-11 00:42:32.407805 | orchestrator |
2025-11-11 00:42:32.407816 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-11-11 00:42:32.407827 | orchestrator | Tuesday 11 November 2025 00:42:31 +0000 (0:00:04.778) 0:00:06.963 ******
2025-11-11 00:42:32.407838 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:42:32.407848 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:42:32.407859 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:42:32.407870 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:42:32.407880 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:42:32.407891 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:42:32.407902 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:42:32.407913 | orchestrator |
2025-11-11 00:42:32.407923 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:42:32.407935 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.407948 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.407959 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.407970 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.407981 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.407992 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.408016 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:42:32.408027 | orchestrator |
2025-11-11 00:42:32.408038 | orchestrator |
2025-11-11 00:42:32.408049 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:42:32.408060 | orchestrator | Tuesday 11 November 2025 00:42:32 +0000 (0:00:00.491) 0:00:07.454 ******
2025-11-11 00:42:32.408070 | orchestrator | ===============================================================================
2025-11-11 00:42:32.408081 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s
2025-11-11 00:42:32.408092 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2025-11-11 00:42:32.408102 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.93s
2025-11-11 00:42:32.408113 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-11-11 00:42:44.657125 | orchestrator | 2025-11-11 00:42:44 | INFO  | Task 8625fb65-1c89-4abf-b13e-2fc3fde4f7a3 (frr) was prepared for execution.
2025-11-11 00:42:44.657275 | orchestrator | 2025-11-11 00:42:44 | INFO  | It takes a moment until task 8625fb65-1c89-4abf-b13e-2fc3fde4f7a3 (frr) has been started and output is visible here.
2025-11-11 00:43:10.402669 | orchestrator | 2025-11-11 00:43:10.402848 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-11-11 00:43:10.402869 | orchestrator | 2025-11-11 00:43:10.402881 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-11-11 00:43:10.402893 | orchestrator | Tuesday 11 November 2025 00:42:48 +0000 (0:00:00.226) 0:00:00.226 ****** 2025-11-11 00:43:10.402905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-11-11 00:43:10.402918 | orchestrator | 2025-11-11 00:43:10.402929 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-11-11 00:43:10.402940 | orchestrator | Tuesday 11 November 2025 00:42:49 +0000 (0:00:00.224) 0:00:00.451 ****** 2025-11-11 00:43:10.402951 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:10.402963 | orchestrator | 2025-11-11 00:43:10.402994 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-11-11 00:43:10.403006 | orchestrator | Tuesday 11 November 2025 00:42:50 +0000 (0:00:01.143) 0:00:01.594 ****** 2025-11-11 00:43:10.403017 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:10.403028 | orchestrator | 2025-11-11 00:43:10.403038 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-11-11 00:43:10.403049 | orchestrator | Tuesday 11 November 2025 00:42:59 +0000 (0:00:09.408) 0:00:11.003 ****** 2025-11-11 00:43:10.403060 | orchestrator | ok: [testbed-manager] 2025-11-11 00:43:10.403072 | orchestrator | 2025-11-11 00:43:10.403083 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-11-11 00:43:10.403094 | orchestrator | Tuesday 11 November 2025 00:43:00 +0000 (0:00:01.001) 0:00:12.004 ****** 2025-11-11 
00:43:10.403105 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:10.403115 | orchestrator | 2025-11-11 00:43:10.403126 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-11-11 00:43:10.403137 | orchestrator | Tuesday 11 November 2025 00:43:01 +0000 (0:00:00.908) 0:00:12.913 ****** 2025-11-11 00:43:10.403148 | orchestrator | ok: [testbed-manager] 2025-11-11 00:43:10.403159 | orchestrator | 2025-11-11 00:43:10.403170 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-11-11 00:43:10.403182 | orchestrator | Tuesday 11 November 2025 00:43:02 +0000 (0:00:01.154) 0:00:14.067 ****** 2025-11-11 00:43:10.403195 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:43:10.403207 | orchestrator | 2025-11-11 00:43:10.403222 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2025-11-11 00:43:10.403260 | orchestrator | Tuesday 11 November 2025 00:43:02 +0000 (0:00:00.139) 0:00:14.207 ****** 2025-11-11 00:43:10.403273 | orchestrator | skipping: [testbed-manager] 2025-11-11 00:43:10.403285 | orchestrator | 2025-11-11 00:43:10.403298 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2025-11-11 00:43:10.403310 | orchestrator | Tuesday 11 November 2025 00:43:02 +0000 (0:00:00.154) 0:00:14.362 ****** 2025-11-11 00:43:10.403322 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:10.403333 | orchestrator | 2025-11-11 00:43:10.403346 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-11-11 00:43:10.403358 | orchestrator | Tuesday 11 November 2025 00:43:03 +0000 (0:00:00.972) 0:00:15.334 ****** 2025-11-11 00:43:10.403370 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-11-11 00:43:10.403382 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-11-11 00:43:10.403397 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-11-11 00:43:10.403409 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-11-11 00:43:10.403422 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-11-11 00:43:10.403435 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-11-11 00:43:10.403447 | orchestrator | 2025-11-11 00:43:10.403459 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-11-11 00:43:10.403471 | orchestrator | Tuesday 11 November 2025 00:43:07 +0000 (0:00:03.202) 0:00:18.536 ****** 2025-11-11 00:43:10.403483 | orchestrator | ok: [testbed-manager] 2025-11-11 00:43:10.403496 | orchestrator | 2025-11-11 00:43:10.403508 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-11-11 00:43:10.403520 | orchestrator | Tuesday 11 November 2025 00:43:08 +0000 (0:00:01.561) 0:00:20.097 ****** 2025-11-11 00:43:10.403532 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:10.403544 | orchestrator | 2025-11-11 00:43:10.403555 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:43:10.403566 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-11 00:43:10.403577 | orchestrator | 2025-11-11 00:43:10.403588 | orchestrator | 2025-11-11 00:43:10.403598 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:43:10.403609 | orchestrator | Tuesday 11 November 2025 00:43:10 +0000 (0:00:01.413) 0:00:21.511 ****** 2025-11-11 00:43:10.403620 | 
orchestrator | =============================================================================== 2025-11-11 00:43:10.403630 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.41s 2025-11-11 00:43:10.403641 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.20s 2025-11-11 00:43:10.403652 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.56s 2025-11-11 00:43:10.403662 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s 2025-11-11 00:43:10.403673 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.15s 2025-11-11 00:43:10.403702 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.14s 2025-11-11 00:43:10.403714 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2025-11-11 00:43:10.403724 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2025-11-11 00:43:10.403735 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2025-11-11 00:43:10.403746 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-11-11 00:43:10.403776 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2025-11-11 00:43:10.403797 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2025-11-11 00:43:10.705467 | orchestrator | 2025-11-11 00:43:10.707254 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Nov 11 00:43:10 UTC 2025 2025-11-11 00:43:10.707294 | orchestrator | 2025-11-11 00:43:12.608915 | orchestrator | 2025-11-11 00:43:12 | INFO  | Collection nutshell is prepared for execution 2025-11-11 00:43:12.609021 | orchestrator | 2025-11-11 00:43:12 | INFO  | A [0] - 
dotfiles 2025-11-11 00:43:22.670273 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - homer 2025-11-11 00:43:22.670348 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - netdata 2025-11-11 00:43:22.670360 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - openstackclient 2025-11-11 00:43:22.670373 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - phpmyadmin 2025-11-11 00:43:22.670589 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - common 2025-11-11 00:43:22.674923 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- loadbalancer 2025-11-11 00:43:22.675213 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [2] --- opensearch 2025-11-11 00:43:22.676193 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [2] --- mariadb-ng 2025-11-11 00:43:22.676214 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [3] ---- horizon 2025-11-11 00:43:22.676226 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [3] ---- keystone 2025-11-11 00:43:22.677115 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- neutron 2025-11-11 00:43:22.677137 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ wait-for-nova 2025-11-11 00:43:22.677150 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [6] ------- octavia 2025-11-11 00:43:22.678469 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- barbican 2025-11-11 00:43:22.678491 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- designate 2025-11-11 00:43:22.678894 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- ironic 2025-11-11 00:43:22.678915 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- placement 2025-11-11 00:43:22.679115 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- magnum 2025-11-11 00:43:22.680009 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- openvswitch 2025-11-11 00:43:22.680029 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [2] --- ovn 2025-11-11 00:43:22.680506 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- memcached 2025-11-11 
00:43:22.680527 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- redis 2025-11-11 00:43:22.680893 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- rabbitmq-ng 2025-11-11 00:43:22.681320 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - kubernetes 2025-11-11 00:43:22.683575 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- kubeconfig 2025-11-11 00:43:22.683598 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- copy-kubeconfig 2025-11-11 00:43:22.683891 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [0] - ceph 2025-11-11 00:43:22.686448 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [1] -- ceph-pools 2025-11-11 00:43:22.686530 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [2] --- copy-ceph-keys 2025-11-11 00:43:22.686544 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [3] ---- cephclient 2025-11-11 00:43:22.686565 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2025-11-11 00:43:22.686576 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- wait-for-keystone 2025-11-11 00:43:22.686912 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ kolla-ceph-rgw 2025-11-11 00:43:22.686966 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ glance 2025-11-11 00:43:22.686978 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ cinder 2025-11-11 00:43:22.686988 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ nova 2025-11-11 00:43:22.687432 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [4] ----- prometheus 2025-11-11 00:43:22.687451 | orchestrator | 2025-11-11 00:43:22 | INFO  | A [5] ------ grafana 2025-11-11 00:43:22.874433 | orchestrator | 2025-11-11 00:43:22 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-11-11 00:43:22.874506 | orchestrator | 2025-11-11 00:43:22 | INFO  | Tasks are running in the background 2025-11-11 00:43:25.910879 | orchestrator | 2025-11-11 00:43:25 | INFO  | No task IDs specified, wait for all currently running 
tasks 2025-11-11 00:43:28.024483 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:28.026432 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:28.027930 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task beb69731-78f3-49bd-a9cc-bfaefdf636e5 is in state SUCCESS 2025-11-11 00:43:28.028906 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task a0dbbee3-1ecc-4c95-875b-6bc50af8bf8b is in state SUCCESS 2025-11-11 00:43:28.029682 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 94d848ae-4d55-49ec-904a-453061849e6e is in state SUCCESS 2025-11-11 00:43:28.030451 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:28.033149 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:28.033641 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:28.034403 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:28.035146 | orchestrator | 2025-11-11 00:43:28 | INFO  | Task 1a31a40d-47e2-49ff-aae0-bed3d62337ff is in state SUCCESS 2025-11-11 00:43:28.035168 | orchestrator | 2025-11-11 00:43:28 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:31.104318 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:31.105076 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:31.105622 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:31.106883 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 
87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:31.107143 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 5f4453b7-a32a-418f-b9f7-ae254753f6d0 is in state SUCCESS 2025-11-11 00:43:31.108264 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:31.108846 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:31.109297 | orchestrator | 2025-11-11 00:43:31 | INFO  | Task 34c5fea7-5ee0-4c19-ac2b-b2f0cb376f06 is in state SUCCESS 2025-11-11 00:43:31.109323 | orchestrator | 2025-11-11 00:43:31 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:34.154279 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:34.157575 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:34.157905 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task d4973fb2-69f9-401f-8d6c-7e3b130286c3 is in state SUCCESS 2025-11-11 00:43:34.158465 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task a3cd2943-5a61-493a-8412-70fe763dfbc8 is in state STARTED 2025-11-11 00:43:34.161172 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:34.163289 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:34.163630 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task 6f5d8c14-07b0-4c8a-9309-acc40fbaa6dc is in state STARTED 2025-11-11 00:43:34.166678 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:34.168340 | orchestrator | 2025-11-11 00:43:34 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:34.168455 | orchestrator | 2025-11-11 
00:43:34 | INFO  | Task 3a8c27fa-8eee-4b7a-b5e2-8de6ec4e1361 is in state STARTED 2025-11-11 00:43:34.168471 | orchestrator | 2025-11-11 00:43:34 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:37.207487 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:37.207607 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:37.211347 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task a3cd2943-5a61-493a-8412-70fe763dfbc8 is in state SUCCESS 2025-11-11 00:43:37.211396 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:37.212563 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:37.215676 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 6f5d8c14-07b0-4c8a-9309-acc40fbaa6dc is in state SUCCESS 2025-11-11 00:43:37.215713 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:37.216703 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:37.222183 | orchestrator | 2025-11-11 00:43:37 | INFO  | Task 3a8c27fa-8eee-4b7a-b5e2-8de6ec4e1361 is in state SUCCESS 2025-11-11 00:43:37.222215 | orchestrator | 2025-11-11 00:43:37 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:40.336575 | orchestrator | 2025-11-11 00:43:40 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:40.336700 | orchestrator | 2025-11-11 00:43:40 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:40.336715 | orchestrator | 2025-11-11 00:43:40 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:40.336728 | orchestrator | 2025-11-11 
00:43:40 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:40.336739 | orchestrator | 2025-11-11 00:43:40 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:40.336750 | orchestrator | 2025-11-11 00:43:40 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:40.336762 | orchestrator | 2025-11-11 00:43:40 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:43.335212 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:43.335806 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:43.336873 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:43.338697 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:43.340206 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:43.343794 | orchestrator | 2025-11-11 00:43:43 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:43.343848 | orchestrator | 2025-11-11 00:43:43 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:46.392893 | orchestrator | 2025-11-11 00:43:46 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:46.394429 | orchestrator | 2025-11-11 00:43:46 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:46.396684 | orchestrator | 2025-11-11 00:43:46 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:46.398184 | orchestrator | 2025-11-11 00:43:46 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:46.402161 | orchestrator | 2025-11-11 
00:43:46 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state STARTED 2025-11-11 00:43:46.403373 | orchestrator | 2025-11-11 00:43:46 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED 2025-11-11 00:43:46.403474 | orchestrator | 2025-11-11 00:43:46 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:43:49.468602 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED 2025-11-11 00:43:49.468723 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:43:49.469043 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:43:49.472162 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED 2025-11-11 00:43:49.472543 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:43:49.473054 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task 57cd3812-6efd-43fc-afaa-1e0dacb8a240 is in state SUCCESS 2025-11-11 00:43:49.475022 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475049 | orchestrator | 2025-11-11 00:43:49.475055 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475061 | orchestrator | 2025-11-11 00:43:49.475067 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475072 | orchestrator | 2025-11-11 00:43:49.475078 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475084 | orchestrator | 2025-11-11 00:43:49.475089 | 
orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475095 | orchestrator | 2025-11-11 00:43:49.475101 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475107 | orchestrator | 2025-11-11 00:43:49.475113 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475118 | orchestrator | 2025-11-11 00:43:49.475124 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475148 | orchestrator | 2025-11-11 00:43:49.475154 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475160 | orchestrator | 2025-11-11 00:43:49.475165 | orchestrator | ERROR: Unable to create local directories(/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/ansible/.ansible' 2025-11-11 00:43:49.475171 | orchestrator | 2025-11-11 00:43:49.475176 | orchestrator | 2025-11-11 00:43:49.475182 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-11-11 00:43:49.475190 | orchestrator | 2025-11-11 00:43:49.475199 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-11-11 00:43:49.475207 | orchestrator | Tuesday 11 November 2025 00:43:36 +0000 (0:00:00.415) 0:00:00.415 ****** 2025-11-11 00:43:49.475216 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:43:49.475225 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:43:49.475233 | orchestrator | changed: [testbed-manager] 2025-11-11 00:43:49.475241 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:43:49.475250 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:43:49.475258 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:43:49.475267 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:43:49.475272 | orchestrator | 2025-11-11 00:43:49.475277 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-11-11 00:43:49.475282 | orchestrator | Tuesday 11 November 2025 00:43:39 +0000 (0:00:03.644) 0:00:04.059 ****** 2025-11-11 00:43:49.475288 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-11 00:43:49.475294 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-11 00:43:49.475299 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-11 00:43:49.475304 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-11 00:43:49.475309 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-11 00:43:49.475314 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-11 00:43:49.475320 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-11 00:43:49.475325 | orchestrator | 2025-11-11 00:43:49.475330 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
***
2025-11-11 00:43:49.475336 | orchestrator | Tuesday 11 November 2025 00:43:41 +0000 (0:00:01.175) 0:00:05.235 ******
2025-11-11 00:43:49.475343 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.473030', 'end': '2025-11-11 00:43:40.500797', 'delta': '0:00:00.027767', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475561 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.496475', 'end': '2025-11-11 00:43:40.505655', 'delta': '0:00:00.009180', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475583 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.498755', 'end': '2025-11-11 00:43:40.506630', 'delta': '0:00:00.007875', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475600 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.588843', 'end': '2025-11-11 00:43:40.597584', 'delta': '0:00:00.008741', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475609 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.676275', 'end': '2025-11-11 00:43:40.682222', 'delta': '0:00:00.005947', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475616 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.743854', 'end': '2025-11-11 00:43:40.751737', 'delta': '0:00:00.007883', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475622 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-11 00:43:40.862839', 'end': '2025-11-11 00:43:40.872071', 'delta': '0:00:00.009232', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines':
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-11 00:43:49.475633 | orchestrator |
2025-11-11 00:43:49.475643 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-11-11 00:43:49.475649 | orchestrator | Tuesday 11 November 2025 00:43:43 +0000 (0:00:02.077) 0:00:07.312 ******
2025-11-11 00:43:49.475656 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-11-11 00:43:49.475662 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-11-11 00:43:49.475668 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-11-11 00:43:49.475674 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-11-11 00:43:49.475679 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-11-11 00:43:49.475685 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-11-11 00:43:49.475691 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-11-11 00:43:49.475697 | orchestrator |
2025-11-11 00:43:49.475703 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-11-11 00:43:49.475709 | orchestrator | Tuesday 11 November 2025 00:43:44 +0000 (0:00:01.466) 0:00:08.779 ******
2025-11-11 00:43:49.475715 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-11-11 00:43:49.475721 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-11-11 00:43:49.475727 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-11-11 00:43:49.475733 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-11-11 00:43:49.475739 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-11-11 00:43:49.475745 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-11-11 00:43:49.475750 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-11-11 00:43:49.475756 | orchestrator |
2025-11-11 00:43:49.475762 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:43:49.475768 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475776 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475782 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475788 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475794 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475800 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475809 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:43:49.475815 | orchestrator |
2025-11-11 00:43:49.475820 | orchestrator |
2025-11-11 00:43:49.475827 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:43:49.475832 | orchestrator | Tuesday 11 November 2025 00:43:46 +0000 (0:00:02.443) 0:00:11.222 ******
2025-11-11 00:43:49.475838 | orchestrator | ===============================================================================
2025-11-11 00:43:49.475844 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.64s
2025-11-11 00:43:49.475850 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.44s
2025-11-11 00:43:49.475856 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.08s
2025-11-11 00:43:49.475862 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.47s
2025-11-11 00:43:49.475868 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.18s
2025-11-11 00:43:49.476361 | orchestrator | 2025-11-11 00:43:49 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:43:49.476381 | orchestrator | 2025-11-11 00:43:49 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:43:52.505586 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:43:52.505727 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:43:52.505762 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:43:52.506479 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:43:52.507439 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:43:52.508163 | orchestrator | 2025-11-11 00:43:52 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is
in state STARTED
2025-11-11 00:43:52.508207 | orchestrator | 2025-11-11 00:43:52 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:43:55.548053 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:43:55.548128 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:43:55.548322 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:43:55.548962 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:43:55.549865 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:43:55.550359 | orchestrator | 2025-11-11 00:43:55 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:43:55.550407 | orchestrator | 2025-11-11 00:43:55 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:43:58.577607 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:43:58.577937 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:43:58.578761 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:43:58.579807 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:43:58.583166 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:43:58.584017 | orchestrator | 2025-11-11 00:43:58 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:43:58.584271 | orchestrator | 2025-11-11 00:43:58 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:01.621940 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:44:01.622573 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:01.626343 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:01.626393 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:01.626405 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:01.626447 | orchestrator | 2025-11-11 00:44:01 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:44:01.626475 | orchestrator | 2025-11-11 00:44:01 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:04.673592 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:44:04.673706 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:04.675144 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:04.677501 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:04.677855 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:04.679941 | orchestrator | 2025-11-11 00:44:04 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:44:04.680892 | orchestrator | 2025-11-11 00:44:04 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:07.730509 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state STARTED
2025-11-11 00:44:07.732228 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:07.733309 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:07.734943 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:07.738898 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:07.741723 | orchestrator | 2025-11-11 00:44:07 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:44:07.742602 | orchestrator | 2025-11-11 00:44:07 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:10.875960 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task f80fab5d-ff4b-42ae-9c8a-36f2480cba9f is in state SUCCESS
2025-11-11 00:44:10.876116 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:10.876132 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:10.876140 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:10.876147 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:10.876153 | orchestrator | 2025-11-11 00:44:10 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:44:10.876160 | orchestrator | 2025-11-11 00:44:10 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:13.880850 | orchestrator | 2025-11-11 00:44:13 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:13.903671 | orchestrator | 2025-11-11 00:44:13 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:13.903770 | orchestrator | 2025-11-11 00:44:13 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:13.903777 | orchestrator | 2025-11-11 00:44:13 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:13.903783 | orchestrator | 2025-11-11 00:44:13 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state STARTED
2025-11-11 00:44:13.903821 | orchestrator | 2025-11-11 00:44:13 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:16.929214 | orchestrator | 2025-11-11 00:44:16 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:16.929345 | orchestrator | 2025-11-11 00:44:16 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:16.929361 | orchestrator | 2025-11-11 00:44:16 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:16.929910 | orchestrator | 2025-11-11 00:44:16 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:16.930121 | orchestrator | 2025-11-11 00:44:16 | INFO  | Task 5281b47c-7183-4e6d-be79-57f3be83d609 is in state SUCCESS
2025-11-11 00:44:16.930224 | orchestrator | 2025-11-11 00:44:16 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:19.990520 | orchestrator | 2025-11-11 00:44:19 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:19.992503 | orchestrator | 2025-11-11 00:44:19 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:19.995401 | orchestrator | 2025-11-11 00:44:19 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:19.997537 | orchestrator | 2025-11-11 00:44:19 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:19.997647 | orchestrator | 2025-11-11 00:44:19 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:23.086696 | orchestrator | 2025-11-11 00:44:23 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:23.086815 | orchestrator | 2025-11-11 00:44:23 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:23.162864 | orchestrator | 2025-11-11 00:44:23 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:23.162955 | orchestrator | 2025-11-11 00:44:23 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:23.162968 | orchestrator | 2025-11-11 00:44:23 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:26.201229 | orchestrator | 2025-11-11 00:44:26 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:26.201763 | orchestrator | 2025-11-11 00:44:26 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:26.202278 | orchestrator | 2025-11-11 00:44:26 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:26.203194 | orchestrator | 2025-11-11 00:44:26 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:26.204665 | orchestrator | 2025-11-11 00:44:26 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:29.236588 | orchestrator | 2025-11-11 00:44:29 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:29.236741 | orchestrator | 2025-11-11 00:44:29 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:29.237616 | orchestrator | 2025-11-11 00:44:29 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state STARTED
2025-11-11 00:44:29.238599 | orchestrator | 2025-11-11 00:44:29 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED
2025-11-11 00:44:29.238930 | orchestrator | 2025-11-11 00:44:29 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:44:32.312341 | orchestrator | 2025-11-11 00:44:32 | INFO  |
Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:44:32.314793 | orchestrator | 2025-11-11 00:44:32 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED
2025-11-11 00:44:32.316770 | orchestrator | 2025-11-11 00:44:32 | INFO  | Task 87656c84-fdf7-4584-b0ee-ca7d041e7ef7 is in state SUCCESS
2025-11-11 00:44:32.317561 | orchestrator |
2025-11-11 00:44:32.317704 | orchestrator |
2025-11-11 00:44:32.317730 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-11-11 00:44:32.317751 | orchestrator |
2025-11-11 00:44:32.317771 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-11-11 00:44:32.317787 | orchestrator | Tuesday 11 November 2025 00:43:34 +0000 (0:00:00.348) 0:00:00.348 ******
2025-11-11 00:44:32.317798 | orchestrator | ok: [testbed-manager] => {
2025-11-11 00:44:32.317811 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-11-11 00:44:32.317824 | orchestrator | }
2025-11-11 00:44:32.317836 | orchestrator |
2025-11-11 00:44:32.317846 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-11-11 00:44:32.317857 | orchestrator | Tuesday 11 November 2025 00:43:34 +0000 (0:00:00.133) 0:00:00.481 ******
2025-11-11 00:44:32.317867 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.317879 | orchestrator |
2025-11-11 00:44:32.317890 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-11-11 00:44:32.317901 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:01.221) 0:00:01.703 ******
2025-11-11 00:44:32.317911 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-11-11 00:44:32.317922 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-11-11 00:44:32.317933 | orchestrator |
2025-11-11 00:44:32.317944 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-11-11 00:44:32.317954 | orchestrator | Tuesday 11 November 2025 00:43:37 +0000 (0:00:01.725) 0:00:03.434 ******
2025-11-11 00:44:32.317965 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.318912 | orchestrator |
2025-11-11 00:44:32.318931 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-11-11 00:44:32.318943 | orchestrator | Tuesday 11 November 2025 00:43:39 +0000 (0:00:02.775) 0:00:06.209 ******
2025-11-11 00:44:32.318953 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.318964 | orchestrator |
2025-11-11 00:44:32.318975 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-11-11 00:44:32.318985 | orchestrator | Tuesday 11 November 2025 00:43:41 +0000 (0:00:01.320) 0:00:07.529 ******
2025-11-11 00:44:32.318996 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-11-11 00:44:32.319007 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.319018 | orchestrator |
2025-11-11 00:44:32.319029 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-11-11 00:44:32.319040 | orchestrator | Tuesday 11 November 2025 00:44:06 +0000 (0:00:24.957) 0:00:32.487 ******
2025-11-11 00:44:32.319050 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319061 | orchestrator |
2025-11-11 00:44:32.319072 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:44:32.319083 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:44:32.319096 | orchestrator |
2025-11-11 00:44:32.319136 | orchestrator |
2025-11-11 00:44:32.319176 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:44:32.319187 | orchestrator | Tuesday 11 November 2025 00:44:08 +0000 (0:00:02.250) 0:00:34.738 ******
2025-11-11 00:44:32.319204 | orchestrator | ===============================================================================
2025-11-11 00:44:32.319229 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.96s
2025-11-11 00:44:32.319251 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.78s
2025-11-11 00:44:32.319268 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.25s
2025-11-11 00:44:32.319307 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.72s
2025-11-11 00:44:32.319324 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.32s
2025-11-11 00:44:32.319343 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.23s
2025-11-11 00:44:32.319354 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.13s
2025-11-11 00:44:32.319365 | orchestrator |
2025-11-11 00:44:32.319376 | orchestrator |
2025-11-11 00:44:32.319386 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-11-11 00:44:32.319397 | orchestrator |
2025-11-11 00:44:32.319408 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-11-11 00:44:32.319419 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.587) 0:00:00.587 ******
2025-11-11 00:44:32.319430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-11-11 00:44:32.319442 | orchestrator |
2025-11-11 00:44:32.319453 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-11-11 00:44:32.319466 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.363) 0:00:00.951 ******
2025-11-11 00:44:32.319478 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-11-11 00:44:32.319491 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-11-11 00:44:32.319503 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-11-11 00:44:32.319515 | orchestrator |
2025-11-11 00:44:32.319528 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-11-11 00:44:32.319539 | orchestrator | Tuesday 11 November 2025 00:43:37 +0000 (0:00:01.587) 0:00:02.538 ******
2025-11-11 00:44:32.319552 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319565 | orchestrator |
2025-11-11 00:44:32.319577 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-11-11 00:44:32.319589 | orchestrator | Tuesday 11 November 2025 00:43:38 +0000 (0:00:01.586)
0:00:04.125 ******
2025-11-11 00:44:32.319615 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-11-11 00:44:32.319628 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.319641 | orchestrator |
2025-11-11 00:44:32.319653 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-11-11 00:44:32.319664 | orchestrator | Tuesday 11 November 2025 00:44:09 +0000 (0:00:30.991) 0:00:35.117 ******
2025-11-11 00:44:32.319676 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319688 | orchestrator |
2025-11-11 00:44:32.319701 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-11-11 00:44:32.319713 | orchestrator | Tuesday 11 November 2025 00:44:11 +0000 (0:00:01.227) 0:00:36.345 ******
2025-11-11 00:44:32.319725 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.319737 | orchestrator |
2025-11-11 00:44:32.319749 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-11-11 00:44:32.319761 | orchestrator | Tuesday 11 November 2025 00:44:12 +0000 (0:00:00.894) 0:00:37.239 ******
2025-11-11 00:44:32.319774 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319806 | orchestrator |
2025-11-11 00:44:32.319819 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-11-11 00:44:32.319831 | orchestrator | Tuesday 11 November 2025 00:44:14 +0000 (0:00:02.167) 0:00:39.406 ******
2025-11-11 00:44:32.319842 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319852 | orchestrator |
2025-11-11 00:44:32.319863 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-11-11 00:44:32.319873 | orchestrator | Tuesday 11 November 2025 00:44:15 +0000 (0:00:01.154) 0:00:40.561 ******
2025-11-11 00:44:32.319884 | orchestrator | changed: [testbed-manager]
2025-11-11 00:44:32.319894 | orchestrator |
2025-11-11 00:44:32.319905 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-11-11 00:44:32.319922 | orchestrator | Tuesday 11 November 2025 00:44:16 +0000 (0:00:00.681) 0:00:41.242 ******
2025-11-11 00:44:32.319932 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.319943 | orchestrator |
2025-11-11 00:44:32.319953 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:44:32.319999 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:44:32.320012 | orchestrator |
2025-11-11 00:44:32.320022 | orchestrator |
2025-11-11 00:44:32.320033 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:44:32.320048 | orchestrator | Tuesday 11 November 2025 00:44:16 +0000 (0:00:00.328) 0:00:41.571 ******
2025-11-11 00:44:32.320059 | orchestrator | ===============================================================================
2025-11-11 00:44:32.320069 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 30.99s
2025-11-11 00:44:32.320080 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.17s
2025-11-11 00:44:32.320090 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.59s
2025-11-11 00:44:32.320101 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.59s
2025-11-11 00:44:32.320111 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.23s
2025-11-11 00:44:32.320122 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.15s
2025-11-11 00:44:32.320132 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.89s
2025-11-11 00:44:32.320173 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.68s
2025-11-11 00:44:32.320184 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.36s
2025-11-11 00:44:32.320198 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.33s
2025-11-11 00:44:32.320217 | orchestrator |
2025-11-11 00:44:32.320234 | orchestrator |
2025-11-11 00:44:32.320251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-11 00:44:32.320270 | orchestrator |
2025-11-11 00:44:32.320289 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-11 00:44:32.320307 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.175) 0:00:00.175 ******
2025-11-11 00:44:32.320325 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-11-11 00:44:32.320344 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-11-11 00:44:32.320363 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-11-11 00:44:32.320377 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-11-11 00:44:32.320388 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-11-11 00:44:32.320399 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-11-11 00:44:32.320409 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-11-11 00:44:32.320420 | orchestrator |
2025-11-11 00:44:32.320430 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-11-11 00:44:32.320441 | orchestrator |
2025-11-11 00:44:32.320452 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-11-11 00:44:32.320462 | orchestrator | Tuesday 11 November 2025 00:43:36 +0000 (0:00:01.404) 0:00:01.579 ******
2025-11-11 00:44:32.320489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:44:32.320503 | orchestrator |
2025-11-11 00:44:32.320514 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-11-11 00:44:32.320524 | orchestrator | Tuesday 11 November 2025 00:43:37 +0000 (0:00:01.256) 0:00:02.836 ******
2025-11-11 00:44:32.320535 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:44:32.320554 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:44:32.320565 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.320576 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:44:32.320587 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:44:32.320607 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:44:32.320618 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:44:32.320629 | orchestrator |
2025-11-11 00:44:32.320639 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-11-11 00:44:32.320650 | orchestrator | Tuesday 11 November 2025 00:43:39 +0000 (0:00:01.433) 0:00:04.270 ******
2025-11-11 00:44:32.320661 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:44:32.320671 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:44:32.320682 | orchestrator | ok: [testbed-manager]
2025-11-11 00:44:32.320692 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:44:32.320703 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:44:32.320713 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:44:32.320724 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:44:32.320734 | orchestrator |
2025-11-11 00:44:32.320745 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-11-11 00:44:32.320755 |
orchestrator | Tuesday 11 November 2025 00:43:41 +0000 (0:00:02.850) 0:00:07.120 ****** 2025-11-11 00:44:32.320766 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:44:32.320776 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:44:32.320787 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.320797 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:44:32.320807 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:44:32.320818 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:44:32.320828 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:44:32.320839 | orchestrator | 2025-11-11 00:44:32.320849 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-11-11 00:44:32.320860 | orchestrator | Tuesday 11 November 2025 00:43:43 +0000 (0:00:01.914) 0:00:09.035 ****** 2025-11-11 00:44:32.320870 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.320881 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:44:32.320891 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:44:32.320901 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:44:32.320912 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:44:32.320922 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:44:32.320933 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:44:32.320943 | orchestrator | 2025-11-11 00:44:32.320954 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-11-11 00:44:32.320965 | orchestrator | Tuesday 11 November 2025 00:43:54 +0000 (0:00:10.794) 0:00:19.829 ****** 2025-11-11 00:44:32.320975 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:44:32.320985 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:44:32.320996 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:44:32.321011 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:44:32.321022 | orchestrator | changed: [testbed-node-0] 
2025-11-11 00:44:32.321032 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:44:32.321043 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.321053 | orchestrator | 2025-11-11 00:44:32.321064 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-11-11 00:44:32.321074 | orchestrator | Tuesday 11 November 2025 00:44:15 +0000 (0:00:20.452) 0:00:40.281 ****** 2025-11-11 00:44:32.321086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:44:32.321099 | orchestrator | 2025-11-11 00:44:32.321110 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-11-11 00:44:32.321120 | orchestrator | Tuesday 11 November 2025 00:44:16 +0000 (0:00:01.328) 0:00:41.610 ****** 2025-11-11 00:44:32.321131 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-11-11 00:44:32.321202 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-11-11 00:44:32.321232 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-11-11 00:44:32.321250 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-11-11 00:44:32.321268 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-11-11 00:44:32.321285 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-11-11 00:44:32.321304 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-11-11 00:44:32.321322 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-11-11 00:44:32.321342 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-11-11 00:44:32.321359 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-11-11 00:44:32.321375 | orchestrator | changed: [testbed-node-4] => 
(item=stream.conf) 2025-11-11 00:44:32.321386 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-11-11 00:44:32.321396 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-11-11 00:44:32.321407 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-11-11 00:44:32.321417 | orchestrator | 2025-11-11 00:44:32.321428 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-11-11 00:44:32.321439 | orchestrator | Tuesday 11 November 2025 00:44:20 +0000 (0:00:03.946) 0:00:45.557 ****** 2025-11-11 00:44:32.321450 | orchestrator | ok: [testbed-manager] 2025-11-11 00:44:32.321460 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:44:32.321471 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:44:32.321482 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:44:32.321492 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:44:32.321502 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:44:32.321513 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:44:32.321523 | orchestrator | 2025-11-11 00:44:32.321534 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-11-11 00:44:32.321545 | orchestrator | Tuesday 11 November 2025 00:44:21 +0000 (0:00:00.949) 0:00:46.507 ****** 2025-11-11 00:44:32.321555 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:44:32.321565 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.321576 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:44:32.321586 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:44:32.321597 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:44:32.321607 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:44:32.321618 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:44:32.321628 | orchestrator | 2025-11-11 00:44:32.321639 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2025-11-11 00:44:32.321658 | orchestrator | Tuesday 11 November 2025 00:44:22 +0000 (0:00:01.280) 0:00:47.787 ****** 2025-11-11 00:44:32.321670 | orchestrator | ok: [testbed-manager] 2025-11-11 00:44:32.321680 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:44:32.321691 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:44:32.321701 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:44:32.321712 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:44:32.321722 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:44:32.321733 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:44:32.321743 | orchestrator | 2025-11-11 00:44:32.321754 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-11-11 00:44:32.321764 | orchestrator | Tuesday 11 November 2025 00:44:23 +0000 (0:00:01.096) 0:00:48.884 ****** 2025-11-11 00:44:32.321775 | orchestrator | ok: [testbed-manager] 2025-11-11 00:44:32.321785 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:44:32.321796 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:44:32.321806 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:44:32.321816 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:44:32.321827 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:44:32.321837 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:44:32.321848 | orchestrator | 2025-11-11 00:44:32.321858 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-11-11 00:44:32.321869 | orchestrator | Tuesday 11 November 2025 00:44:25 +0000 (0:00:01.709) 0:00:50.593 ****** 2025-11-11 00:44:32.321887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-11-11 00:44:32.321899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:44:32.321911 | orchestrator | 2025-11-11 00:44:32.321921 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-11-11 00:44:32.321932 | orchestrator | Tuesday 11 November 2025 00:44:26 +0000 (0:00:01.184) 0:00:51.778 ****** 2025-11-11 00:44:32.321942 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.321953 | orchestrator | 2025-11-11 00:44:32.321964 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-11-11 00:44:32.321974 | orchestrator | Tuesday 11 November 2025 00:44:28 +0000 (0:00:01.612) 0:00:53.390 ****** 2025-11-11 00:44:32.321985 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:44:32.321995 | orchestrator | changed: [testbed-manager] 2025-11-11 00:44:32.322012 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:44:32.322099 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:44:32.322119 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:44:32.322161 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:44:32.322181 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:44:32.322197 | orchestrator | 2025-11-11 00:44:32.322216 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:44:32.322233 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322251 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322268 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322287 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322307 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-11-11 00:44:32.322325 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322344 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:44:32.322363 | orchestrator | 2025-11-11 00:44:32.322382 | orchestrator | 2025-11-11 00:44:32.322400 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:44:32.322419 | orchestrator | Tuesday 11 November 2025 00:44:31 +0000 (0:00:03.057) 0:00:56.448 ****** 2025-11-11 00:44:32.322438 | orchestrator | =============================================================================== 2025-11-11 00:44:32.322456 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 20.45s 2025-11-11 00:44:32.322474 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.79s 2025-11-11 00:44:32.322493 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.95s 2025-11-11 00:44:32.322512 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.06s 2025-11-11 00:44:32.322529 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.85s 2025-11-11 00:44:32.322548 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.91s 2025-11-11 00:44:32.322560 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.71s 2025-11-11 00:44:32.322570 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.61s 2025-11-11 00:44:32.322592 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.43s 2025-11-11 00:44:32.322602 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.40s 2025-11-11 
00:44:32.322613 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.33s 2025-11-11 00:44:32.322634 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.28s 2025-11-11 00:44:32.322645 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.26s 2025-11-11 00:44:32.322655 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.18s 2025-11-11 00:44:32.322666 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.10s 2025-11-11 00:44:32.322676 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.95s 2025-11-11 00:44:32.322687 | orchestrator | 2025-11-11 00:44:32 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:44:32.322698 | orchestrator | 2025-11-11 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:44:35.365432 | orchestrator | 2025-11-11 00:44:35 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:44:35.366707 | orchestrator | 2025-11-11 00:44:35 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:44:35.369556 | orchestrator | 2025-11-11 00:44:35 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:44:35.369666 | orchestrator | 2025-11-11 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:44:38.408070 | orchestrator | 2025-11-11 00:44:38 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:44:38.412816 | orchestrator | 2025-11-11 00:44:38 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:44:38.412898 | orchestrator | 2025-11-11 00:44:38 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:44:38.412913 | orchestrator | 2025-11-11 00:44:38 | INFO  
| Wait 1 second(s) until the next check 2025-11-11 00:44:50.609594 | orchestrator | 2025-11-11 00:44:50 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state
STARTED 2025-11-11 00:44:50.609791 | orchestrator | 2025-11-11 00:44:50 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:44:53.655391 | orchestrator | 2025-11-11 00:44:53 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:44:53.656984 | orchestrator | 2025-11-11 00:44:53 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:44:53.659379 | orchestrator | 2025-11-11 00:44:53 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:44:53.659421 | orchestrator | 2025-11-11 00:44:53 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:44:56.694392 | orchestrator | 2025-11-11 00:44:56 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:44:56.695693 | orchestrator | 2025-11-11 00:44:56 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:44:56.696955 | orchestrator | 2025-11-11 00:44:56 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state STARTED 2025-11-11 00:44:56.696985 | orchestrator | 2025-11-11 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:44:59.734607 | orchestrator | 2025-11-11 00:44:59 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:44:59.735550 | orchestrator | 2025-11-11 00:44:59 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:44:59.737105 | orchestrator | 2025-11-11 00:44:59 | INFO  | Task 6e63d2e5-32fe-4da1-9b85-4c60475b243b is in state SUCCESS 2025-11-11 00:44:59.737138 | orchestrator | 2025-11-11 00:44:59 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:45:02.777390 | orchestrator | 2025-11-11 00:45:02 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:45:02.779091 | orchestrator | 2025-11-11 00:45:02 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:45:02.779133 | orchestrator | 
2025-11-11 00:45:02 | INFO  | Wait 1 second(s) until the next check 2025-11-11
00:46:55 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:46:55.429806 | orchestrator | 2025-11-11 00:46:55 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:46:55.429840 | orchestrator | 2025-11-11 00:46:55 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:46:58.470503 | orchestrator | 2025-11-11 00:46:58 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:46:58.472405 | orchestrator | 2025-11-11 00:46:58 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:46:58.472456 | orchestrator | 2025-11-11 00:46:58 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:01.509600 | orchestrator | 2025-11-11 00:47:01 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:01.510681 | orchestrator | 2025-11-11 00:47:01 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:01.510739 | orchestrator | 2025-11-11 00:47:01 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:04.541345 | orchestrator | 2025-11-11 00:47:04 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:04.541468 | orchestrator | 2025-11-11 00:47:04 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:04.541482 | orchestrator | 2025-11-11 00:47:04 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:07.583649 | orchestrator | 2025-11-11 00:47:07 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:07.585384 | orchestrator | 2025-11-11 00:47:07 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:07.585493 | orchestrator | 2025-11-11 00:47:07 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:10.632200 | orchestrator | 2025-11-11 00:47:10 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state 
STARTED 2025-11-11 00:47:10.632351 | orchestrator | 2025-11-11 00:47:10 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:10.632380 | orchestrator | 2025-11-11 00:47:10 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:13.672193 | orchestrator | 2025-11-11 00:47:13 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:13.673280 | orchestrator | 2025-11-11 00:47:13 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:13.673319 | orchestrator | 2025-11-11 00:47:13 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:16.709742 | orchestrator | 2025-11-11 00:47:16 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:16.711705 | orchestrator | 2025-11-11 00:47:16 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:16.711742 | orchestrator | 2025-11-11 00:47:16 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:19.757827 | orchestrator | 2025-11-11 00:47:19 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:19.758917 | orchestrator | 2025-11-11 00:47:19 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:19.758945 | orchestrator | 2025-11-11 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:22.804354 | orchestrator | 2025-11-11 00:47:22 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:22.804906 | orchestrator | 2025-11-11 00:47:22 | INFO  | Task 87a08623-e73f-4826-a723-35439db49203 is in state STARTED 2025-11-11 00:47:22.805071 | orchestrator | 2025-11-11 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:47:25.858101 | orchestrator | 2025-11-11 00:47:25 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED 2025-11-11 00:47:25.863430 | orchestrator | 2025-11-11 00:47:25 | INFO  
| Task 87a08623-e73f-4826-a723-35439db49203 is in state SUCCESS 2025-11-11 00:47:25.865997 | orchestrator | 2025-11-11 00:47:25.866109 | orchestrator | 2025-11-11 00:47:25.866124 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-11-11 00:47:25.866137 | orchestrator | 2025-11-11 00:47:25.866148 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-11-11 00:47:25.866160 | orchestrator | Tuesday 11 November 2025 00:43:51 +0000 (0:00:00.188) 0:00:00.188 ****** 2025-11-11 00:47:25.866171 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.866183 | orchestrator | 2025-11-11 00:47:25.866195 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-11-11 00:47:25.866205 | orchestrator | Tuesday 11 November 2025 00:43:52 +0000 (0:00:00.883) 0:00:01.072 ****** 2025-11-11 00:47:25.866217 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-11-11 00:47:25.866228 | orchestrator | 2025-11-11 00:47:25.866239 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-11-11 00:47:25.866249 | orchestrator | Tuesday 11 November 2025 00:43:52 +0000 (0:00:00.436) 0:00:01.509 ****** 2025-11-11 00:47:25.866260 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.866271 | orchestrator | 2025-11-11 00:47:25.866282 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-11-11 00:47:25.866292 | orchestrator | Tuesday 11 November 2025 00:43:53 +0000 (0:00:00.876) 0:00:02.386 ****** 2025-11-11 00:47:25.866303 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-11-11 00:47:25.866314 | orchestrator | ok: [testbed-manager]
2025-11-11 00:47:25.866324 | orchestrator |
2025-11-11 00:47:25.866335 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-11-11 00:47:25.866346 | orchestrator | Tuesday 11 November 2025 00:44:52 +0000 (0:00:58.443) 0:01:00.829 ******
2025-11-11 00:47:25.866357 | orchestrator | changed: [testbed-manager]
2025-11-11 00:47:25.866367 | orchestrator |
2025-11-11 00:47:25.866378 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:47:25.866398 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:47:25.866431 | orchestrator |
2025-11-11 00:47:25.866442 | orchestrator |
2025-11-11 00:47:25.866453 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:47:25.866464 | orchestrator | Tuesday 11 November 2025 00:44:57 +0000 (0:00:05.600) 0:01:06.429 ******
2025-11-11 00:47:25.866475 | orchestrator | ===============================================================================
2025-11-11 00:47:25.866486 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.44s
2025-11-11 00:47:25.866497 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.60s
2025-11-11 00:47:25.866507 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.88s
2025-11-11 00:47:25.866518 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.88s
2025-11-11 00:47:25.866529 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.44s
2025-11-11 00:47:25.866539 | orchestrator |
2025-11-11 00:47:25.866550 | orchestrator |
2025-11-11 00:47:25.866563 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-11-11 00:47:25.866576 | orchestrator |
2025-11-11 00:47:25.866588 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-11-11 00:47:25.866600 | orchestrator | Tuesday 11 November 2025 00:43:27 +0000 (0:00:00.143) 0:00:00.143 ******
2025-11-11 00:47:25.866613 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:47:25.866625 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:47:25.866637 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:47:25.866651 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.866670 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.866689 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.866706 | orchestrator |
2025-11-11 00:47:25.866724 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-11-11 00:47:25.866745 | orchestrator | Tuesday 11 November 2025 00:43:28 +0000 (0:00:00.654) 0:00:00.798 ******
2025-11-11 00:47:25.866765 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.866785 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.866824 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.866837 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.866847 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.866858 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.866868 | orchestrator |
2025-11-11 00:47:25.866879 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-11-11 00:47:25.866890 | orchestrator | Tuesday 11 November 2025 00:43:29 +0000 (0:00:00.597) 0:00:01.395 ******
2025-11-11 00:47:25.866900 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.866911 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.866921 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.866932 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.866942 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.866952 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.866963 | orchestrator |
2025-11-11 00:47:25.866974 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-11-11 00:47:25.866984 | orchestrator | Tuesday 11 November 2025 00:43:29 +0000 (0:00:00.639) 0:00:02.035 ******
2025-11-11 00:47:25.866995 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:47:25.867005 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.867016 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.867026 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:47:25.867037 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:47:25.867047 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.867057 | orchestrator |
2025-11-11 00:47:25.867068 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-11-11 00:47:25.867079 | orchestrator | Tuesday 11 November 2025 00:43:31 +0000 (0:00:02.343) 0:00:04.378 ******
2025-11-11 00:47:25.867089 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:47:25.867100 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:47:25.867119 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:47:25.867130 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.867141 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.867151 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.867162 | orchestrator |
2025-11-11 00:47:25.867188 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-11-11 00:47:25.867199 | orchestrator | Tuesday 11 November 2025 00:43:33 +0000 (0:00:01.211) 0:00:05.590 ******
2025-11-11 00:47:25.867210 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:47:25.867221 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:47:25.867231 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:47:25.867242 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.867253 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.867263 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.867274 | orchestrator |
2025-11-11 00:47:25.867285 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-11-11 00:47:25.867295 | orchestrator | Tuesday 11 November 2025 00:43:34 +0000 (0:00:01.231) 0:00:06.821 ******
2025-11-11 00:47:25.867306 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.867367 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.867378 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.867389 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.867399 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.867410 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.867421 | orchestrator |
2025-11-11 00:47:25.867432 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-11-11 00:47:25.867443 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.838) 0:00:07.660 ******
2025-11-11 00:47:25.867454 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.867464 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.867475 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.867486 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.867496 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.867507 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.867518 | orchestrator |
2025-11-11 00:47:25.867529 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-11-11 00:47:25.867539 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.562) 0:00:08.222 ******
2025-11-11 00:47:25.867582 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867594 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867605 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.867616 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867627 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867638 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.867649 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867667 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867686 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867704 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867722 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.867742 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867761 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867778 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.867817 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.867831 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-11-11 00:47:25.867851 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-11-11 00:47:25.867862 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.867873 | orchestrator |
2025-11-11 00:47:25.867884 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-11-11 00:47:25.867895 | orchestrator | Tuesday 11 November 2025 00:43:36 +0000 (0:00:00.815) 0:00:09.038 ******
2025-11-11 00:47:25.867905 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.867916 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.867927 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.867937 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.867948 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.867959 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.867970 | orchestrator |
2025-11-11 00:47:25.867980 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-11-11 00:47:25.867992 | orchestrator | Tuesday 11 November 2025 00:43:38 +0000 (0:00:02.085) 0:00:11.123 ******
2025-11-11 00:47:25.868003 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:47:25.868014 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:47:25.868024 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.868035 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.868045 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:47:25.868055 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.868066 | orchestrator |
2025-11-11 00:47:25.868077 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-11-11 00:47:25.868087 | orchestrator | Tuesday 11 November 2025 00:43:39 +0000 (0:00:00.726) 0:00:11.849 ******
2025-11-11 00:47:25.868098 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.868108 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:47:25.868119 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:47:25.868129 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.868140 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.868150 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:47:25.868161 | orchestrator |
2025-11-11 00:47:25.868172 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-11-11 00:47:25.868182 | orchestrator | Tuesday 11 November 2025 00:43:45 +0000 (0:00:05.761) 0:00:17.610 ******
2025-11-11 00:47:25.868193 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868203 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868214 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868225 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868235 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868255 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868266 | orchestrator |
2025-11-11 00:47:25.868277 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-11-11 00:47:25.868288 | orchestrator | Tuesday 11 November 2025 00:43:46 +0000 (0:00:00.943) 0:00:18.554 ******
2025-11-11 00:47:25.868298 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868309 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868319 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868330 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868340 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868351 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868362 | orchestrator |
2025-11-11 00:47:25.868373 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-11-11 00:47:25.868386 | orchestrator | Tuesday 11 November 2025 00:43:47 +0000 (0:00:01.665) 0:00:20.219 ******
2025-11-11 00:47:25.868396 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868407 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868417 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868428 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868438 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868449 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868467 | orchestrator |
2025-11-11 00:47:25.868478 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-11-11 00:47:25.868488 | orchestrator | Tuesday 11 November 2025 00:43:48 +0000 (0:00:00.740) 0:00:20.960 ******
2025-11-11 00:47:25.868499 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-11-11 00:47:25.868510 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-11-11 00:47:25.868521 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868532 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-11-11 00:47:25.868542 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-11-11 00:47:25.868553 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868564 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-11-11 00:47:25.868579 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-11-11 00:47:25.868591 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868601 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-11-11 00:47:25.868612 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-11-11 00:47:25.868623 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868634 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-11-11 00:47:25.868644 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-11-11 00:47:25.868658 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868677 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-11-11 00:47:25.868694 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-11-11 00:47:25.868712 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868730 | orchestrator |
2025-11-11 00:47:25.868749 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-11-11 00:47:25.868768 | orchestrator | Tuesday 11 November 2025 00:43:49 +0000 (0:00:00.793) 0:00:21.753 ******
2025-11-11 00:47:25.868787 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868822 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868833 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868844 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868854 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868865 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868875 | orchestrator |
2025-11-11 00:47:25.868886 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2025-11-11 00:47:25.868897 | orchestrator | Tuesday 11 November 2025 00:43:50 +0000 (0:00:00.648) 0:00:22.402 ******
2025-11-11 00:47:25.868907 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:47:25.868917 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:47:25.868928 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:47:25.868938 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.868948 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.868959 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.868969 | orchestrator |
2025-11-11 00:47:25.868979 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-11-11 00:47:25.868990 | orchestrator |
2025-11-11 00:47:25.869001 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-11-11 00:47:25.869011 | orchestrator | Tuesday 11 November 2025 00:43:51 +0000 (0:00:01.075) 0:00:23.478 ******
2025-11-11 00:47:25.869022 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.869032 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.869043 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.869053 | orchestrator |
2025-11-11 00:47:25.869064 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-11-11 00:47:25.869074 | orchestrator | Tuesday 11 November 2025 00:43:52 +0000 (0:00:00.933) 0:00:24.411 ******
2025-11-11 00:47:25.869085 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.869095 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.869106 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.869124 | orchestrator |
2025-11-11 00:47:25.869135 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-11-11 00:47:25.869145 | orchestrator | Tuesday 11 November 2025 00:43:53 +0000 (0:00:01.133) 0:00:25.545 ******
2025-11-11 00:47:25.869156 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.869166 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.869176 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.869187 | orchestrator |
2025-11-11 00:47:25.869198 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-11-11 00:47:25.869208 | orchestrator | Tuesday 11 November 2025 00:43:53 +0000 (0:00:00.789) 0:00:26.334 ******
2025-11-11 00:47:25.869219 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.869229 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.869240 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.869250 | orchestrator |
2025-11-11 00:47:25.869261 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-11-11 00:47:25.869272 | orchestrator | Tuesday 11 November 2025 00:43:54 +0000 (0:00:00.712) 0:00:27.047 ******
2025-11-11 00:47:25.869282 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.869300 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869311 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869322 | orchestrator |
2025-11-11 00:47:25.869333 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-11-11 00:47:25.869344 | orchestrator | Tuesday 11 November 2025 00:43:55 +0000 (0:00:00.661) 0:00:27.709 ******
2025-11-11 00:47:25.869354 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.869365 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.869375 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.869386 | orchestrator |
2025-11-11 00:47:25.869396 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-11-11 00:47:25.869407 | orchestrator | Tuesday 11 November 2025 00:43:56 +0000 (0:00:00.760) 0:00:28.469 ******
2025-11-11 00:47:25.869418 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.869428 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.869439 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.869449 | orchestrator |
2025-11-11 00:47:25.869460 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-11-11 00:47:25.869471 | orchestrator | Tuesday 11 November 2025 00:43:57 +0000 (0:00:01.186) 0:00:29.656 ******
2025-11-11 00:47:25.869481 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:47:25.869492 | orchestrator |
2025-11-11 00:47:25.869503 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-11-11 00:47:25.869513 | orchestrator | Tuesday 11 November 2025 00:43:57 +0000 (0:00:00.457) 0:00:30.114 ******
2025-11-11 00:47:25.869524 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.869535 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.869545 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.869556 | orchestrator |
2025-11-11 00:47:25.869567 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-11-11 00:47:25.869577 | orchestrator | Tuesday 11 November 2025 00:43:59 +0000 (0:00:01.706) 0:00:31.821 ******
2025-11-11 00:47:25.869588 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869599 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869615 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.869626 | orchestrator |
2025-11-11 00:47:25.869637 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-11-11 00:47:25.869648 | orchestrator | Tuesday 11 November 2025 00:43:59 +0000 (0:00:00.491) 0:00:32.312 ******
2025-11-11 00:47:25.869665 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869683 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869701 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.869718 | orchestrator |
2025-11-11 00:47:25.869738 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-11-11 00:47:25.869756 | orchestrator | Tuesday 11 November 2025 00:44:00 +0000 (0:00:00.862) 0:00:33.175 ******
2025-11-11 00:47:25.869786 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869836 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869848 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.869858 | orchestrator |
2025-11-11 00:47:25.869869 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-11-11 00:47:25.869879 | orchestrator | Tuesday 11 November 2025 00:44:02 +0000 (0:00:01.272) 0:00:34.448 ******
2025-11-11 00:47:25.869890 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.869900 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869911 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869921 | orchestrator |
2025-11-11 00:47:25.869932 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-11-11 00:47:25.869942 | orchestrator | Tuesday 11 November 2025 00:44:02 +0000 (0:00:00.301) 0:00:34.749 ******
2025-11-11 00:47:25.869953 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:47:25.869964 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:47:25.869974 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:47:25.869984 | orchestrator |
2025-11-11 00:47:25.869995 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-11-11 00:47:25.870005 | orchestrator | Tuesday 11 November 2025 00:44:02 +0000 (0:00:00.504) 0:00:35.254 ******
2025-11-11 00:47:25.870042 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:47:25.870053 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:47:25.870066 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:47:25.870077 | orchestrator |
2025-11-11 00:47:25.870087 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2025-11-11 00:47:25.870098 | orchestrator | Tuesday 11 November 2025 00:44:03 +0000 (0:00:01.051) 0:00:36.305 ******
2025-11-11 00:47:25.870109 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.870119 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.870130 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.870140 | orchestrator |
2025-11-11 00:47:25.870151 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2025-11-11 00:47:25.870162 | orchestrator | Tuesday 11 November 2025 00:44:07 +0000 (0:00:03.163) 0:00:39.469 ******
2025-11-11 00:47:25.870172 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:47:25.870183 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:47:25.870194 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:47:25.870204 | orchestrator |
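The join verification that follows uses Ansible's `retries:`/`until:` loop (20 attempts, visible in the "retries left" countdown). The same retry-until-true pattern can be sketched in Python; the function name, defaults, and the `nodes_joined` probe here are illustrative stand-ins, not the k3s role's actual code:

```python
import time

def wait_until(check, retries=20, delay=10):
    """Retry check() until it returns True or retries are exhausted.

    Mirrors Ansible's `retries:` / `delay:` loop as used by the
    "Verify that all nodes actually joined" task; names and defaults
    are hypothetical.
    """
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Hypothetical probe: reports success once a counter reaches a threshold,
# standing in for "all three masters show up in the node list".
attempts = {"n": 0}
def nodes_joined():
    attempts["n"] += 1
    return attempts["n"] >= 4

print(wait_until(nodes_joined, retries=20, delay=0))  # → True
```

With `delay=0` the demo returns immediately; the real task sleeps between attempts, which is why the retry countdown in the log spans several minutes.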
2025-11-11 00:47:25.870215 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-11-11 00:47:25.870226 | orchestrator | Tuesday 11 November 2025 00:44:07 +0000 (0:00:00.555) 0:00:40.025 ****** 2025-11-11 00:47:25.870237 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-11 00:47:25.870248 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-11 00:47:25.870259 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-11 00:47:25.870270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-11 00:47:25.870289 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-11 00:47:25.870301 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-11 00:47:25.870312 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-11 00:47:25.870322 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-11 00:47:25.870341 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-11-11 00:47:25.870352 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-11 00:47:25.870362 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-11 00:47:25.870373 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-11 00:47:25.870383 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-11-11 00:47:25.870400 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-11-11 00:47:25.870410 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-11-11 00:47:25.870421 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.870432 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.870443 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.870454 | orchestrator | 2025-11-11 00:47:25.870465 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-11-11 00:47:25.870475 | orchestrator | Tuesday 11 November 2025 00:45:02 +0000 (0:00:54.419) 0:01:34.444 ****** 2025-11-11 00:47:25.870486 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.870501 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:47:25.870519 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:47:25.870536 | orchestrator | 2025-11-11 00:47:25.870555 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-11-11 00:47:25.870572 | orchestrator | Tuesday 11 November 2025 00:45:02 +0000 (0:00:00.316) 0:01:34.760 ****** 2025-11-11 00:47:25.870588 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.870606 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.870622 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.870640 | orchestrator | 2025-11-11 00:47:25.870657 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-11-11 00:47:25.870674 | orchestrator | Tuesday 11 November 2025 00:45:03 +0000 (0:00:01.151) 0:01:35.912 ****** 2025-11-11 00:47:25.870692 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.870709 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.870726 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.870743 | orchestrator | 2025-11-11 00:47:25.870761 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-11-11 00:47:25.870779 | orchestrator | Tuesday 11 November 2025 00:45:04 +0000 (0:00:01.206) 0:01:37.119 ****** 2025-11-11 00:47:25.870821 
| orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.870841 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.870858 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.870875 | orchestrator | 2025-11-11 00:47:25.870892 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-11-11 00:47:25.870910 | orchestrator | Tuesday 11 November 2025 00:45:29 +0000 (0:00:24.696) 0:02:01.816 ****** 2025-11-11 00:47:25.870927 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.870944 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.870962 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.870980 | orchestrator | 2025-11-11 00:47:25.870999 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-11-11 00:47:25.871017 | orchestrator | Tuesday 11 November 2025 00:45:30 +0000 (0:00:00.629) 0:02:02.446 ****** 2025-11-11 00:47:25.871036 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.871055 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.871073 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.871108 | orchestrator | 2025-11-11 00:47:25.871127 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-11-11 00:47:25.871146 | orchestrator | Tuesday 11 November 2025 00:45:30 +0000 (0:00:00.665) 0:02:03.111 ****** 2025-11-11 00:47:25.871164 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.871183 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.871201 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.871220 | orchestrator | 2025-11-11 00:47:25.871238 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-11-11 00:47:25.871257 | orchestrator | Tuesday 11 November 2025 00:45:31 +0000 (0:00:00.605) 0:02:03.716 ****** 2025-11-11 00:47:25.871274 | orchestrator | ok: [testbed-node-1] 
2025-11-11 00:47:25.871293 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.871312 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.871331 | orchestrator | 2025-11-11 00:47:25.871349 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-11-11 00:47:25.871368 | orchestrator | Tuesday 11 November 2025 00:45:31 +0000 (0:00:00.573) 0:02:04.289 ****** 2025-11-11 00:47:25.871386 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.871403 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.871420 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.871438 | orchestrator | 2025-11-11 00:47:25.871471 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-11-11 00:47:25.871491 | orchestrator | Tuesday 11 November 2025 00:45:32 +0000 (0:00:00.647) 0:02:04.937 ****** 2025-11-11 00:47:25.871508 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.871526 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.871544 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.871561 | orchestrator | 2025-11-11 00:47:25.871579 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-11-11 00:47:25.871597 | orchestrator | Tuesday 11 November 2025 00:45:33 +0000 (0:00:00.633) 0:02:05.570 ****** 2025-11-11 00:47:25.871616 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.871635 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.871654 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.871671 | orchestrator | 2025-11-11 00:47:25.871689 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-11-11 00:47:25.871707 | orchestrator | Tuesday 11 November 2025 00:45:33 +0000 (0:00:00.619) 0:02:06.190 ****** 2025-11-11 00:47:25.871725 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.871742 | 
orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.871759 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.871777 | orchestrator | 2025-11-11 00:47:25.871867 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-11-11 00:47:25.871890 | orchestrator | Tuesday 11 November 2025 00:45:34 +0000 (0:00:00.816) 0:02:07.007 ****** 2025-11-11 00:47:25.871910 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:47:25.871929 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:47:25.871948 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:47:25.871967 | orchestrator | 2025-11-11 00:47:25.871985 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-11-11 00:47:25.872001 | orchestrator | Tuesday 11 November 2025 00:45:35 +0000 (0:00:01.033) 0:02:08.041 ****** 2025-11-11 00:47:25.872017 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.872032 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:47:25.872048 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:47:25.872063 | orchestrator | 2025-11-11 00:47:25.872080 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-11-11 00:47:25.872096 | orchestrator | Tuesday 11 November 2025 00:45:35 +0000 (0:00:00.269) 0:02:08.310 ****** 2025-11-11 00:47:25.872112 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.872127 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:47:25.872144 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:47:25.872159 | orchestrator | 2025-11-11 00:47:25.872176 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-11-11 00:47:25.872216 | orchestrator | Tuesday 11 November 2025 00:45:36 +0000 (0:00:00.264) 0:02:08.575 ****** 2025-11-11 00:47:25.872233 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.872248 | orchestrator | 
ok: [testbed-node-1] 2025-11-11 00:47:25.872265 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.872280 | orchestrator | 2025-11-11 00:47:25.872297 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-11-11 00:47:25.872311 | orchestrator | Tuesday 11 November 2025 00:45:36 +0000 (0:00:00.600) 0:02:09.175 ****** 2025-11-11 00:47:25.872332 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.872342 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.872352 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.872361 | orchestrator | 2025-11-11 00:47:25.872372 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-11-11 00:47:25.872382 | orchestrator | Tuesday 11 November 2025 00:45:37 +0000 (0:00:00.863) 0:02:10.039 ****** 2025-11-11 00:47:25.872391 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-11 00:47:25.872401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-11 00:47:25.872411 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-11 00:47:25.872420 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-11 00:47:25.872429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-11 00:47:25.872439 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-11 00:47:25.872448 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-11 00:47:25.872458 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-11 
00:47:25.872467 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-11 00:47:25.872476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-11-11 00:47:25.872486 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-11 00:47:25.872495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-11-11 00:47:25.872504 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-11 00:47:25.872514 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-11 00:47:25.872523 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-11 00:47:25.872532 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-11 00:47:25.872542 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-11 00:47:25.872562 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-11 00:47:25.872572 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-11 00:47:25.872581 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-11 00:47:25.872591 | orchestrator | 2025-11-11 00:47:25.872600 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-11-11 00:47:25.872610 | orchestrator | 2025-11-11 00:47:25.872619 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-11-11 00:47:25.872629 | orchestrator | Tuesday 11 November 2025 00:45:40 +0000 (0:00:03.067) 
0:02:13.107 ****** 2025-11-11 00:47:25.872638 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:47:25.872659 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:47:25.872676 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:47:25.872691 | orchestrator | 2025-11-11 00:47:25.872706 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-11-11 00:47:25.872723 | orchestrator | Tuesday 11 November 2025 00:45:41 +0000 (0:00:00.338) 0:02:13.446 ****** 2025-11-11 00:47:25.872740 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:47:25.872756 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:47:25.872772 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:47:25.872782 | orchestrator | 2025-11-11 00:47:25.872791 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-11-11 00:47:25.872828 | orchestrator | Tuesday 11 November 2025 00:45:41 +0000 (0:00:00.828) 0:02:14.274 ****** 2025-11-11 00:47:25.872838 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:47:25.872848 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:47:25.872857 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:47:25.872866 | orchestrator | 2025-11-11 00:47:25.872876 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-11-11 00:47:25.872885 | orchestrator | Tuesday 11 November 2025 00:45:42 +0000 (0:00:00.347) 0:02:14.621 ****** 2025-11-11 00:47:25.872901 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:47:25.872910 | orchestrator | 2025-11-11 00:47:25.872920 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-11-11 00:47:25.872929 | orchestrator | Tuesday 11 November 2025 00:45:42 +0000 (0:00:00.494) 0:02:15.115 ****** 2025-11-11 00:47:25.872939 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:47:25.872948 | 
orchestrator | skipping: [testbed-node-4] 2025-11-11 00:47:25.872957 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:47:25.872967 | orchestrator | 2025-11-11 00:47:25.872976 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-11-11 00:47:25.872985 | orchestrator | Tuesday 11 November 2025 00:45:43 +0000 (0:00:00.466) 0:02:15.581 ****** 2025-11-11 00:47:25.872995 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:47:25.873004 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:47:25.873014 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:47:25.873023 | orchestrator | 2025-11-11 00:47:25.873032 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-11-11 00:47:25.873042 | orchestrator | Tuesday 11 November 2025 00:45:43 +0000 (0:00:00.315) 0:02:15.897 ****** 2025-11-11 00:47:25.873051 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:47:25.873060 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:47:25.873070 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:47:25.873079 | orchestrator | 2025-11-11 00:47:25.873088 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-11-11 00:47:25.873097 | orchestrator | Tuesday 11 November 2025 00:45:43 +0000 (0:00:00.325) 0:02:16.223 ****** 2025-11-11 00:47:25.873107 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:47:25.873116 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:47:25.873125 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:47:25.873135 | orchestrator | 2025-11-11 00:47:25.873144 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-11-11 00:47:25.873153 | orchestrator | Tuesday 11 November 2025 00:45:44 +0000 (0:00:00.629) 0:02:16.852 ****** 2025-11-11 00:47:25.873162 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:47:25.873172 | 
orchestrator | changed: [testbed-node-4] 2025-11-11 00:47:25.873181 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:47:25.873190 | orchestrator | 2025-11-11 00:47:25.873200 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-11-11 00:47:25.873209 | orchestrator | Tuesday 11 November 2025 00:45:45 +0000 (0:00:01.276) 0:02:18.129 ****** 2025-11-11 00:47:25.873218 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:47:25.873228 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:47:25.873237 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:47:25.873255 | orchestrator | 2025-11-11 00:47:25.873265 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-11-11 00:47:25.873274 | orchestrator | Tuesday 11 November 2025 00:45:46 +0000 (0:00:01.242) 0:02:19.372 ****** 2025-11-11 00:47:25.873284 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:47:25.873293 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:47:25.873303 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:47:25.873312 | orchestrator | 2025-11-11 00:47:25.873321 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-11 00:47:25.873331 | orchestrator | 2025-11-11 00:47:25.873340 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-11 00:47:25.873350 | orchestrator | Tuesday 11 November 2025 00:45:57 +0000 (0:00:10.223) 0:02:29.595 ****** 2025-11-11 00:47:25.873359 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.873368 | orchestrator | 2025-11-11 00:47:25.873378 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-11 00:47:25.873387 | orchestrator | Tuesday 11 November 2025 00:45:57 +0000 (0:00:00.743) 0:02:30.338 ****** 2025-11-11 00:47:25.873397 | orchestrator | changed: [testbed-manager] 2025-11-11 
00:47:25.873406 | orchestrator | 2025-11-11 00:47:25.873416 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-11 00:47:25.873425 | orchestrator | Tuesday 11 November 2025 00:45:58 +0000 (0:00:00.604) 0:02:30.943 ****** 2025-11-11 00:47:25.873434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-11 00:47:25.873444 | orchestrator | 2025-11-11 00:47:25.873460 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-11 00:47:25.873470 | orchestrator | Tuesday 11 November 2025 00:45:59 +0000 (0:00:00.556) 0:02:31.499 ****** 2025-11-11 00:47:25.873480 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.873489 | orchestrator | 2025-11-11 00:47:25.873499 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-11 00:47:25.873508 | orchestrator | Tuesday 11 November 2025 00:45:59 +0000 (0:00:00.813) 0:02:32.313 ****** 2025-11-11 00:47:25.873518 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.873527 | orchestrator | 2025-11-11 00:47:25.873536 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-11 00:47:25.873546 | orchestrator | Tuesday 11 November 2025 00:46:00 +0000 (0:00:00.535) 0:02:32.849 ****** 2025-11-11 00:47:25.873556 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-11 00:47:25.873565 | orchestrator | 2025-11-11 00:47:25.873575 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-11 00:47:25.873584 | orchestrator | Tuesday 11 November 2025 00:46:01 +0000 (0:00:01.491) 0:02:34.340 ****** 2025-11-11 00:47:25.873594 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-11 00:47:25.873603 | orchestrator | 2025-11-11 00:47:25.873613 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2025-11-11 00:47:25.873622 | orchestrator | Tuesday 11 November 2025 00:46:02 +0000 (0:00:00.808) 0:02:35.148 ****** 2025-11-11 00:47:25.873631 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.873641 | orchestrator | 2025-11-11 00:47:25.873651 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-11 00:47:25.873668 | orchestrator | Tuesday 11 November 2025 00:46:03 +0000 (0:00:00.421) 0:02:35.570 ****** 2025-11-11 00:47:25.873683 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.873698 | orchestrator | 2025-11-11 00:47:25.873715 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-11-11 00:47:25.873732 | orchestrator | 2025-11-11 00:47:25.873748 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-11-11 00:47:25.873766 | orchestrator | Tuesday 11 November 2025 00:46:03 +0000 (0:00:00.448) 0:02:36.018 ****** 2025-11-11 00:47:25.873776 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.873786 | orchestrator | 2025-11-11 00:47:25.873847 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-11-11 00:47:25.873868 | orchestrator | Tuesday 11 November 2025 00:46:03 +0000 (0:00:00.151) 0:02:36.169 ****** 2025-11-11 00:47:25.873877 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-11-11 00:47:25.873887 | orchestrator | 2025-11-11 00:47:25.873896 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-11-11 00:47:25.873906 | orchestrator | Tuesday 11 November 2025 00:46:04 +0000 (0:00:00.404) 0:02:36.574 ****** 2025-11-11 00:47:25.873916 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.873925 | orchestrator | 2025-11-11 00:47:25.873935 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2025-11-11 00:47:25.873944 | orchestrator | Tuesday 11 November 2025 00:46:04 +0000 (0:00:00.795) 0:02:37.369 ****** 2025-11-11 00:47:25.873953 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.873963 | orchestrator | 2025-11-11 00:47:25.873972 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-11-11 00:47:25.873982 | orchestrator | Tuesday 11 November 2025 00:46:06 +0000 (0:00:01.637) 0:02:39.006 ****** 2025-11-11 00:47:25.873991 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.874001 | orchestrator | 2025-11-11 00:47:25.874010 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-11-11 00:47:25.874080 | orchestrator | Tuesday 11 November 2025 00:46:07 +0000 (0:00:00.880) 0:02:39.887 ****** 2025-11-11 00:47:25.874091 | orchestrator | ok: [testbed-manager] 2025-11-11 00:47:25.874101 | orchestrator | 2025-11-11 00:47:25.874110 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-11-11 00:47:25.874120 | orchestrator | Tuesday 11 November 2025 00:46:07 +0000 (0:00:00.460) 0:02:40.347 ****** 2025-11-11 00:47:25.874129 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.874139 | orchestrator | 2025-11-11 00:47:25.874148 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-11-11 00:47:25.874158 | orchestrator | Tuesday 11 November 2025 00:46:15 +0000 (0:00:07.331) 0:02:47.679 ****** 2025-11-11 00:47:25.874167 | orchestrator | changed: [testbed-manager] 2025-11-11 00:47:25.874177 | orchestrator | 2025-11-11 00:47:25.874186 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-11-11 00:47:25.874196 | orchestrator | Tuesday 11 November 2025 00:46:28 +0000 (0:00:13.696) 0:03:01.376 ****** 2025-11-11 00:47:25.874205 | orchestrator | ok: [testbed-manager] 2025-11-11 
00:47:25.874215 | orchestrator | 2025-11-11 00:47:25.874224 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-11-11 00:47:25.874234 | orchestrator | 2025-11-11 00:47:25.874243 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-11-11 00:47:25.874253 | orchestrator | Tuesday 11 November 2025 00:46:29 +0000 (0:00:00.681) 0:03:02.058 ****** 2025-11-11 00:47:25.874263 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:47:25.874272 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:47:25.874282 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:47:25.874291 | orchestrator | 2025-11-11 00:47:25.874301 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-11-11 00:47:25.874311 | orchestrator | Tuesday 11 November 2025 00:46:29 +0000 (0:00:00.295) 0:03:02.353 ****** 2025-11-11 00:47:25.874320 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874330 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:47:25.874340 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:47:25.874349 | orchestrator | 2025-11-11 00:47:25.874359 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-11-11 00:47:25.874368 | orchestrator | Tuesday 11 November 2025 00:46:30 +0000 (0:00:00.286) 0:03:02.639 ****** 2025-11-11 00:47:25.874376 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:47:25.874384 | orchestrator | 2025-11-11 00:47:25.874391 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-11-11 00:47:25.874406 | orchestrator | Tuesday 11 November 2025 00:46:30 +0000 (0:00:00.500) 0:03:03.140 ****** 2025-11-11 00:47:25.874420 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-11 00:47:25.874428 | orchestrator 
| 2025-11-11 00:47:25.874436 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-11-11 00:47:25.874443 | orchestrator | Tuesday 11 November 2025 00:46:31 +0000 (0:00:00.952) 0:03:04.092 ****** 2025-11-11 00:47:25.874451 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874459 | orchestrator | 2025-11-11 00:47:25.874467 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-11-11 00:47:25.874475 | orchestrator | Tuesday 11 November 2025 00:46:31 +0000 (0:00:00.124) 0:03:04.217 ****** 2025-11-11 00:47:25.874483 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-11 00:47:25.874490 | orchestrator | 2025-11-11 00:47:25.874498 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-11-11 00:47:25.874506 | orchestrator | Tuesday 11 November 2025 00:46:32 +0000 (0:00:00.928) 0:03:05.146 ****** 2025-11-11 00:47:25.874514 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874522 | orchestrator | 2025-11-11 00:47:25.874529 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-11-11 00:47:25.874537 | orchestrator | Tuesday 11 November 2025 00:46:32 +0000 (0:00:00.117) 0:03:05.263 ****** 2025-11-11 00:47:25.874545 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874553 | orchestrator | 2025-11-11 00:47:25.874560 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-11-11 00:47:25.874568 | orchestrator | Tuesday 11 November 2025 00:46:33 +0000 (0:00:00.130) 0:03:05.394 ****** 2025-11-11 00:47:25.874576 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874584 | orchestrator | 2025-11-11 00:47:25.874592 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-11-11 00:47:25.874599 | orchestrator | Tuesday 11 November 2025 00:46:33 
+0000 (0:00:00.138) 0:03:05.533 ****** 2025-11-11 00:47:25.874607 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:47:25.874615 | orchestrator | 2025-11-11 00:47:25.874627 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-11-11 00:47:25.874635 | orchestrator | Tuesday 11 November 2025 00:46:33 +0000 (0:00:00.111) 0:03:05.644 ****** 2025-11-11 00:47:25.874643 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-11 00:47:25.874652 | orchestrator | 2025-11-11 00:47:25.874666 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-11-11 00:47:25.874680 | orchestrator | Tuesday 11 November 2025 00:46:37 +0000 (0:00:04.650) 0:03:10.294 ****** 2025-11-11 00:47:25.874693 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-11-11 00:47:25.874712 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2025-11-11 00:47:25.874725 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-11-11 00:47:25.874740 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-11-11 00:47:25.874753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-11-11 00:47:25.874767 | orchestrator |
2025-11-11 00:47:25.874775 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-11-11 00:47:25.874783 | orchestrator | Tuesday 11 November 2025 00:47:19 +0000 (0:00:42.026) 0:03:52.320 ******
2025-11-11 00:47:25.874791 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-11 00:47:25.874817 | orchestrator |
2025-11-11 00:47:25.874826 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-11-11 00:47:25.874834 | orchestrator | Tuesday 11 November 2025 00:47:21 +0000 (0:00:01.169) 0:03:53.490 ******
2025-11-11 00:47:25.874842 | orchestrator | fatal: [testbed-node-0 -> localhost]: FAILED! => {"changed": false, "checksum": "e067333911ec303b1abbababa17374a0629c5a29", "msg": "Destination directory /tmp/k3s does not exist"}
2025-11-11 00:47:25.874851 | orchestrator |
2025-11-11 00:47:25.874859 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:47:25.874873 | orchestrator | testbed-manager : ok=18  changed=10  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-11-11 00:47:25.874881 | orchestrator | testbed-node-0 : ok=43  changed=20  unreachable=0  failed=1  skipped=24  rescued=0  ignored=0
2025-11-11 00:47:25.874890 | orchestrator | testbed-node-1 : ok=35  changed=16  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-11-11 00:47:25.874898 | orchestrator | testbed-node-2 : ok=35  changed=16  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-11-11 00:47:25.874906 | orchestrator | testbed-node-3 : ok=14  changed=8  unreachable=0  failed=0  skipped=15  rescued=0  ignored=0
2025-11-11 00:47:25.874914 | orchestrator | testbed-node-4 : ok=14  changed=8  unreachable=0  failed=0  skipped=15  rescued=0  ignored=0
2025-11-11 00:47:25.874922 | orchestrator | testbed-node-5 : ok=14  changed=8  unreachable=0  failed=0  skipped=15  rescued=0  ignored=0
2025-11-11 00:47:25.874929 | orchestrator |
2025-11-11 00:47:25.874937 | orchestrator |
2025-11-11 00:47:25.874945 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:47:25.874953 | orchestrator | Tuesday 11 November 2025 00:47:22 +0000 (0:00:01.393) 0:03:54.883 ******
2025-11-11 00:47:25.874966 | orchestrator | ===============================================================================
2025-11-11 00:47:25.874974 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.42s
2025-11-11 00:47:25.874982 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.03s
2025-11-11 00:47:25.874990 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.70s
2025-11-11 00:47:25.874997 | orchestrator | kubectl : Install required packages ------------------------------------ 13.70s
2025-11-11 00:47:25.875005 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.22s
2025-11-11 00:47:25.875013 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.33s
2025-11-11 00:47:25.875021 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.76s
2025-11-11 00:47:25.875029 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.65s
2025-11-11 00:47:25.875036 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.16s
2025-11-11 00:47:25.875044 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s
2025-11-11 00:47:25.875052 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.34s
2025-11-11 00:47:25.875060 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.09s
2025-11-11 00:47:25.875068 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.71s
2025-11-11 00:47:25.875075 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.67s
2025-11-11 00:47:25.875083 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.64s
2025-11-11 00:47:25.875095 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s
2025-11-11 00:47:25.875103 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.39s
2025-11-11 00:47:25.875111 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.28s
2025-11-11 00:47:25.875118 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.27s
2025-11-11 00:47:25.875126 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.24s
2025-11-11 00:47:25.875134 | orchestrator | 2025-11-11 00:47:25 | INFO  | Task 2ec72b67-9c56-45d8-8797-bd6a282144f5 is in state STARTED
2025-11-11 00:47:25.875148 | orchestrator | 2025-11-11 00:47:25 | INFO  | Task 2a16bcc7-d614-430e-9b5c-67b6487b68af is in state STARTED
2025-11-11 00:47:25.875156 | orchestrator | 2025-11-11 00:47:25 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:47:28.917656 | orchestrator | 2025-11-11 00:47:28 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:47:28.923098 | orchestrator | 2025-11-11 00:47:28 | INFO  | Task 2ec72b67-9c56-45d8-8797-bd6a282144f5 is in state STARTED
2025-11-11 00:47:28.929304 | orchestrator | 2025-11-11 00:47:28 | INFO  | Task 2a16bcc7-d614-430e-9b5c-67b6487b68af is in state STARTED
2025-11-11 00:47:28.930925 | orchestrator | 2025-11-11 00:47:28 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:47:31.962227 | orchestrator | 2025-11-11 00:47:31 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:47:31.965646 | orchestrator | 2025-11-11 00:47:31 | INFO  | Task 2ec72b67-9c56-45d8-8797-bd6a282144f5 is in state STARTED
2025-11-11 00:47:31.967787 | orchestrator | 2025-11-11 00:47:31 | INFO  | Task 2a16bcc7-d614-430e-9b5c-67b6487b68af is in state SUCCESS
2025-11-11 00:47:31.968461 | orchestrator | 2025-11-11 00:47:31 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:47:35.011020 | orchestrator | 2025-11-11 00:47:35 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state STARTED
2025-11-11 00:47:35.011162 | orchestrator | 2025-11-11 00:47:35 | INFO  | Task 2ec72b67-9c56-45d8-8797-bd6a282144f5 is in state SUCCESS
2025-11-11
00:47:35.011182 | orchestrator | 2025-11-11 00:47:35 | INFO  | Wait 1 second(s) until the next check
[... repeated status checks elided: task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 remained in state STARTED, polled every ~3 s from 00:47:38 through 00:53:40 ...]
2025-11-11 00:53:43.128989 | orchestrator | 2025-11-11 00:53:43 | INFO  | Task
fb8680da-6344-40d2-9d46-a1e9e03a45cd is in state STARTED 2025-11-11 00:53:43.134894 | orchestrator | 2025-11-11 00:53:43 | INFO  | Task dfc906e1-9d5a-4acb-b350-780bf7ae5d19 is in state SUCCESS 2025-11-11 00:53:43.135333 | orchestrator | 2025-11-11 00:53:43.135362 | orchestrator | 2025-11-11 00:53:43.135372 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-11-11 00:53:43.135383 | orchestrator | 2025-11-11 00:53:43.135423 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-11 00:53:43.135433 | orchestrator | Tuesday 11 November 2025 00:47:26 +0000 (0:00:00.151) 0:00:00.151 ****** 2025-11-11 00:53:43.135442 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-11 00:53:43.135450 | orchestrator | 2025-11-11 00:53:43.135459 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-11 00:53:43.135467 | orchestrator | Tuesday 11 November 2025 00:47:27 +0000 (0:00:00.686) 0:00:00.838 ****** 2025-11-11 00:53:43.135475 | orchestrator | changed: [testbed-manager] 2025-11-11 00:53:43.135484 | orchestrator | 2025-11-11 00:53:43.135507 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-11-11 00:53:43.135515 | orchestrator | Tuesday 11 November 2025 00:47:28 +0000 (0:00:01.133) 0:00:01.971 ****** 2025-11-11 00:53:43.135523 | orchestrator | changed: [testbed-manager] 2025-11-11 00:53:43.135531 | orchestrator | 2025-11-11 00:53:43.135539 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:53:43.135547 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:53:43.135558 | orchestrator | 2025-11-11 00:53:43.135566 | orchestrator | 2025-11-11 00:53:43.135574 | orchestrator | TASKS RECAP 
******************************************************************** 2025-11-11 00:53:43.135582 | orchestrator | Tuesday 11 November 2025 00:47:29 +0000 (0:00:00.459) 0:00:02.430 ****** 2025-11-11 00:53:43.135590 | orchestrator | =============================================================================== 2025-11-11 00:53:43.135619 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s 2025-11-11 00:53:43.135627 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-11-11 00:53:43.135635 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.46s 2025-11-11 00:53:43.135643 | orchestrator | 2025-11-11 00:53:43.135650 | orchestrator | 2025-11-11 00:53:43.135658 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-11 00:53:43.135666 | orchestrator | 2025-11-11 00:53:43.135674 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-11 00:53:43.135681 | orchestrator | Tuesday 11 November 2025 00:47:26 +0000 (0:00:00.154) 0:00:00.154 ****** 2025-11-11 00:53:43.135689 | orchestrator | ok: [testbed-manager] 2025-11-11 00:53:43.135699 | orchestrator | 2025-11-11 00:53:43.135706 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-11 00:53:43.135715 | orchestrator | Tuesday 11 November 2025 00:47:27 +0000 (0:00:00.553) 0:00:00.708 ****** 2025-11-11 00:53:43.135723 | orchestrator | ok: [testbed-manager] 2025-11-11 00:53:43.135730 | orchestrator | 2025-11-11 00:53:43.135738 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-11 00:53:43.135746 | orchestrator | Tuesday 11 November 2025 00:47:28 +0000 (0:00:00.584) 0:00:01.293 ****** 2025-11-11 00:53:43.135754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-11 
00:53:43.135761 | orchestrator | 2025-11-11 00:53:43.135769 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-11 00:53:43.135777 | orchestrator | Tuesday 11 November 2025 00:47:28 +0000 (0:00:00.712) 0:00:02.005 ****** 2025-11-11 00:53:43.135784 | orchestrator | changed: [testbed-manager] 2025-11-11 00:53:43.135792 | orchestrator | 2025-11-11 00:53:43.135800 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-11 00:53:43.135807 | orchestrator | Tuesday 11 November 2025 00:47:30 +0000 (0:00:01.412) 0:00:03.418 ****** 2025-11-11 00:53:43.135815 | orchestrator | changed: [testbed-manager] 2025-11-11 00:53:43.135823 | orchestrator | 2025-11-11 00:53:43.135830 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-11 00:53:43.135838 | orchestrator | Tuesday 11 November 2025 00:47:30 +0000 (0:00:00.531) 0:00:03.950 ****** 2025-11-11 00:53:43.135846 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-11 00:53:43.135854 | orchestrator | 2025-11-11 00:53:43.135862 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-11 00:53:43.135869 | orchestrator | Tuesday 11 November 2025 00:47:32 +0000 (0:00:01.536) 0:00:05.486 ****** 2025-11-11 00:53:43.135877 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-11 00:53:43.135885 | orchestrator | 2025-11-11 00:53:43.135893 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-11-11 00:53:43.135900 | orchestrator | Tuesday 11 November 2025 00:47:33 +0000 (0:00:00.798) 0:00:06.285 ****** 2025-11-11 00:53:43.135908 | orchestrator | ok: [testbed-manager] 2025-11-11 00:53:43.135916 | orchestrator | 2025-11-11 00:53:43.135923 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-11 00:53:43.135931 | 
orchestrator | Tuesday 11 November 2025 00:47:33 +0000 (0:00:00.452) 0:00:06.738 ****** 2025-11-11 00:53:43.135939 | orchestrator | ok: [testbed-manager] 2025-11-11 00:53:43.135946 | orchestrator | 2025-11-11 00:53:43.135954 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:53:43.135965 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-11 00:53:43.135974 | orchestrator | 2025-11-11 00:53:43.135983 | orchestrator | 2025-11-11 00:53:43.135992 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:53:43.136001 | orchestrator | Tuesday 11 November 2025 00:47:33 +0000 (0:00:00.342) 0:00:07.081 ****** 2025-11-11 00:53:43.136010 | orchestrator | =============================================================================== 2025-11-11 00:53:43.136025 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.54s 2025-11-11 00:53:43.136034 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s 2025-11-11 00:53:43.136043 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.80s 2025-11-11 00:53:43.136061 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2025-11-11 00:53:43.136071 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s 2025-11-11 00:53:43.136080 | orchestrator | Get home directory of operator user ------------------------------------- 0.55s 2025-11-11 00:53:43.136089 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.53s 2025-11-11 00:53:43.136098 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s 2025-11-11 00:53:43.136106 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.34s 2025-11-11 00:53:43.136115 | orchestrator | 2025-11-11 00:53:43.138543 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-11 00:53:43.138584 | orchestrator | 2.16.14 2025-11-11 00:53:43.138593 | orchestrator | 2025-11-11 00:53:43.138605 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-11-11 00:53:43.138613 | orchestrator | 2025-11-11 00:53:43.138620 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-11 00:53:43.138628 | orchestrator | Tuesday 11 November 2025 00:43:28 +0000 (0:00:00.790) 0:00:00.790 ****** 2025-11-11 00:53:43.138637 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.138646 | orchestrator | 2025-11-11 00:53:43.138653 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-11 00:53:43.138661 | orchestrator | Tuesday 11 November 2025 00:43:29 +0000 (0:00:01.129) 0:00:01.920 ****** 2025-11-11 00:53:43.138668 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.138675 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.138683 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.138690 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.138697 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.138704 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.138711 | orchestrator | 2025-11-11 00:53:43.138718 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-11 00:53:43.138725 | orchestrator | Tuesday 11 November 2025 00:43:31 +0000 (0:00:01.708) 0:00:03.629 ****** 2025-11-11 00:53:43.138732 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.138739 | orchestrator | ok: [testbed-node-4] 
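The two kubeconfig plays above fetch the admin kubeconfig from testbed-node-0 and then patch its server address so the manager (and the manager service) can reach the API. A minimal stdlib-only sketch of that server rewrite — the hostname and kubeconfig snippet here are made up for illustration; the plays themselves use Ansible modules rather than this helper:

```python
import re

def rewrite_server(kubeconfig_text: str, new_server: str) -> str:
    """Replace the cluster server address in a kubeconfig document.

    Assumes the usual single-line "server: https://..." form; the
    "Change server address in the kubeconfig" tasks above achieve the
    same effect on the copied file.
    """
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, kubeconfig_text)

# Hypothetical example: point the copied kubeconfig at a manager-reachable address.
original = """\
clusters:
- cluster:
    server: https://192.168.16.10:6443
  name: k8s
"""
print(rewrite_server(original, "https://api.testbed.example:6443"))
```

Only the `server:` line changes; credentials and cluster names in the rest of the file are left untouched, which is why a targeted line rewrite is sufficient here.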
2025-11-11 00:53:43.138746 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.138753 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.138760 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.138767 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.138774 | orchestrator | 2025-11-11 00:53:43.138781 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-11 00:53:43.138788 | orchestrator | Tuesday 11 November 2025 00:43:32 +0000 (0:00:00.658) 0:00:04.287 ****** 2025-11-11 00:53:43.138795 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.138802 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.138810 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.138816 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.138823 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.138830 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.138837 | orchestrator | 2025-11-11 00:53:43.138845 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-11 00:53:43.138852 | orchestrator | Tuesday 11 November 2025 00:43:32 +0000 (0:00:00.812) 0:00:05.099 ****** 2025-11-11 00:53:43.138859 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.138866 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.138873 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.138880 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.138897 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.138904 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.138911 | orchestrator | 2025-11-11 00:53:43.138918 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-11 00:53:43.138926 | orchestrator | Tuesday 11 November 2025 00:43:33 +0000 (0:00:00.760) 0:00:05.860 ****** 2025-11-11 00:53:43.138933 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.138940 | 
orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.138947 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.138954 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.138961 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.138968 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.138975 | orchestrator | 2025-11-11 00:53:43.138982 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-11 00:53:43.138989 | orchestrator | Tuesday 11 November 2025 00:43:34 +0000 (0:00:00.528) 0:00:06.388 ****** 2025-11-11 00:53:43.138997 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.139004 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.139011 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.139018 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.139035 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.139043 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.139050 | orchestrator | 2025-11-11 00:53:43.139057 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-11 00:53:43.139064 | orchestrator | Tuesday 11 November 2025 00:43:34 +0000 (0:00:00.710) 0:00:07.099 ****** 2025-11-11 00:53:43.139117 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139126 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.139133 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.139142 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.139150 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.139158 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.139166 | orchestrator | 2025-11-11 00:53:43.139174 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-11 00:53:43.139183 | orchestrator | Tuesday 11 November 2025 00:43:35 +0000 (0:00:00.599) 0:00:07.699 ****** 2025-11-11 
00:53:43.139191 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.139199 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.139207 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.139225 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.139233 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.139242 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.139250 | orchestrator | 2025-11-11 00:53:43.139258 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-11 00:53:43.139267 | orchestrator | Tuesday 11 November 2025 00:43:36 +0000 (0:00:00.736) 0:00:08.436 ****** 2025-11-11 00:53:43.139275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:53:43.139284 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:53:43.139292 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:53:43.139300 | orchestrator | 2025-11-11 00:53:43.139308 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-11 00:53:43.139316 | orchestrator | Tuesday 11 November 2025 00:43:37 +0000 (0:00:00.728) 0:00:09.164 ****** 2025-11-11 00:53:43.139325 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.139333 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.139341 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.139357 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.139366 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.139375 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.139383 | orchestrator | 2025-11-11 00:53:43.139417 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-11 00:53:43.139426 | orchestrator | Tuesday 11 November 2025 00:43:38 +0000 (0:00:01.567) 0:00:10.731 ****** 
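The "Find a running mon container" task queries the container runtime on each monitor host; the loop results recorded further down show the exact command used (`docker ps -q --filter name=ceph-mon-<host>`, with empty stdout meaning no running container). A sketch of the same check, run locally for illustration rather than delegated to each mon host as the play does:

```python
import subprocess

def find_running_mon(mon_hosts, container_binary="docker"):
    """Return the first monitor host with a running ceph-mon container.

    For each host, runs `<binary> ps -q --filter name=ceph-mon-<host>`
    and treats non-empty stdout as "running". Returns None when no
    monitor container is found (as in the log above, where all three
    checks came back empty).
    """
    for host in mon_hosts:
        result = subprocess.run(
            [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        if result.returncode == 0 and result.stdout.strip():
            return host
    return None
```

On a fresh deployment such as this one the scan finds nothing, so the subsequent `running_mon` set_fact tasks are skipped.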
2025-11-11 00:53:43.139440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:53:43.139448 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:53:43.139456 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:53:43.139465 | orchestrator | 2025-11-11 00:53:43.139473 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-11 00:53:43.139481 | orchestrator | Tuesday 11 November 2025 00:43:41 +0000 (0:00:02.692) 0:00:13.424 ****** 2025-11-11 00:53:43.139490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-11 00:53:43.139498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-11 00:53:43.139507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-11 00:53:43.139515 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139523 | orchestrator | 2025-11-11 00:53:43.139531 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-11 00:53:43.139539 | orchestrator | Tuesday 11 November 2025 00:43:41 +0000 (0:00:00.441) 0:00:13.866 ****** 2025-11-11 00:53:43.139549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139574 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139581 | orchestrator | 2025-11-11 00:53:43.139588 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-11 00:53:43.139595 | orchestrator | Tuesday 11 November 2025 00:43:42 +0000 (0:00:00.897) 0:00:14.763 ****** 2025-11-11 00:53:43.139611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139646 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139653 | orchestrator | 2025-11-11 
00:53:43.139660 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-11 00:53:43.139667 | orchestrator | Tuesday 11 November 2025 00:43:42 +0000 (0:00:00.334) 0:00:15.097 ****** 2025-11-11 00:53:43.139682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-11 00:43:39.139265', 'end': '2025-11-11 00:43:39.410355', 'delta': '0:00:00.271090', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-11 00:43:39.956083', 'end': '2025-11-11 00:43:40.235321', 'delta': '0:00:00.279238', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': 
'2025-11-11 00:43:40.859428', 'end': '2025-11-11 00:43:41.106513', 'delta': '0:00:00.247085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.139716 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139723 | orchestrator | 2025-11-11 00:53:43.139730 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-11 00:53:43.139737 | orchestrator | Tuesday 11 November 2025 00:43:43 +0000 (0:00:00.484) 0:00:15.582 ****** 2025-11-11 00:53:43.139745 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.139752 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.139759 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.139766 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.139773 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.139780 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.139787 | orchestrator | 2025-11-11 00:53:43.139795 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-11 00:53:43.139802 | orchestrator | Tuesday 11 November 2025 00:43:44 +0000 (0:00:01.398) 0:00:16.981 ****** 2025-11-11 00:53:43.139809 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-11 00:53:43.139816 | orchestrator | 2025-11-11 00:53:43.139823 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-11 00:53:43.139830 | orchestrator | Tuesday 11 November 2025 00:43:45 +0000 (0:00:00.798) 0:00:17.779 ****** 2025-11-11 00:53:43.139838 | 
orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139845 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.139852 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.139859 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.139866 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.139873 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.139880 | orchestrator | 2025-11-11 00:53:43.139887 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-11 00:53:43.139894 | orchestrator | Tuesday 11 November 2025 00:43:47 +0000 (0:00:01.371) 0:00:19.150 ****** 2025-11-11 00:53:43.139906 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139913 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.139920 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.139927 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.139934 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.139941 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.139948 | orchestrator | 2025-11-11 00:53:43.139956 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-11 00:53:43.139963 | orchestrator | Tuesday 11 November 2025 00:43:48 +0000 (0:00:01.150) 0:00:20.301 ****** 2025-11-11 00:53:43.139970 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.139977 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.139984 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.139991 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.139998 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140005 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140012 | orchestrator | 2025-11-11 00:53:43.140020 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 
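The fsid tasks around this point either reuse the fsid reported by an already-running cluster (the "Get current fsid if cluster is already running" query above) or generate a fresh one on first deployment. A sketch of that decision, with `uuid4` standing in for the generator — an assumption for illustration; in this run the generation path is skipped because a cluster fsid already exists:

```python
import uuid

def choose_fsid(current_fsid=None):
    """Reuse the fsid of a running cluster, else generate a new one.

    current_fsid would come from querying a monitor (e.g. `ceph fsid`);
    a fresh deployment falls through to a random UUID, which matches
    the format Ceph uses for cluster fsids.
    """
    if current_fsid:
        return current_fsid
    return str(uuid.uuid4())
```

Reusing the existing fsid is what makes these plays safe to re-run against a live cluster: a regenerated fsid would orphan every daemon configured with the old one.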
2025-11-11 00:53:43.140027 | orchestrator | Tuesday 11 November 2025 00:43:49 +0000 (0:00:00.916) 0:00:21.217 ****** 2025-11-11 00:53:43.140034 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140041 | orchestrator | 2025-11-11 00:53:43.140048 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-11 00:53:43.140056 | orchestrator | Tuesday 11 November 2025 00:43:49 +0000 (0:00:00.108) 0:00:21.326 ****** 2025-11-11 00:53:43.140063 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140070 | orchestrator | 2025-11-11 00:53:43.140077 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-11 00:53:43.140084 | orchestrator | Tuesday 11 November 2025 00:43:49 +0000 (0:00:00.199) 0:00:21.525 ****** 2025-11-11 00:53:43.140091 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140098 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140106 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140116 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140124 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140131 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140138 | orchestrator | 2025-11-11 00:53:43.140148 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-11 00:53:43.140156 | orchestrator | Tuesday 11 November 2025 00:43:49 +0000 (0:00:00.532) 0:00:22.058 ****** 2025-11-11 00:53:43.140163 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140170 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140177 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140184 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140191 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140198 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140205 | 
orchestrator | 2025-11-11 00:53:43.140213 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-11 00:53:43.140220 | orchestrator | Tuesday 11 November 2025 00:43:50 +0000 (0:00:00.643) 0:00:22.701 ****** 2025-11-11 00:53:43.140227 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140237 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140249 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140261 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140273 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140287 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140304 | orchestrator | 2025-11-11 00:53:43.140316 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-11 00:53:43.140327 | orchestrator | Tuesday 11 November 2025 00:43:51 +0000 (0:00:00.508) 0:00:23.210 ****** 2025-11-11 00:53:43.140338 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140349 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140369 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140380 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140409 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140422 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140434 | orchestrator | 2025-11-11 00:53:43.140446 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-11 00:53:43.140457 | orchestrator | Tuesday 11 November 2025 00:43:51 +0000 (0:00:00.743) 0:00:23.953 ****** 2025-11-11 00:53:43.140469 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140476 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140483 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140490 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140497 | 
orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140504 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140511 | orchestrator | 2025-11-11 00:53:43.140519 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-11 00:53:43.140526 | orchestrator | Tuesday 11 November 2025 00:43:52 +0000 (0:00:00.418) 0:00:24.371 ****** 2025-11-11 00:53:43.140533 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140540 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140547 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140554 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140561 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140568 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140575 | orchestrator | 2025-11-11 00:53:43.140582 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-11 00:53:43.140589 | orchestrator | Tuesday 11 November 2025 00:43:52 +0000 (0:00:00.615) 0:00:24.986 ****** 2025-11-11 00:53:43.140596 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140603 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.140610 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.140617 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.140624 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.140631 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.140638 | orchestrator | 2025-11-11 00:53:43.140645 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-11 00:53:43.140653 | orchestrator | Tuesday 11 November 2025 00:43:53 +0000 (0:00:00.479) 0:00:25.466 ****** 2025-11-11 00:53:43.140661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962', 'dm-uuid-LVM-S3eHSIHD1uB7sO1A8koWrLKT6fx6SNzYKx9W40acTdBKUd94RLezbMeDN5mN8Ppa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2', 'dm-uuid-LVM-w2QPVVCR86DwYlu6QkrJB3O0tNM0SE156HGTBXUZ41JMh0kCuzp1wpN1AfNOcRPH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHG2AP-RMJ2-XU8z-urBi-TjE9-JjnK-7sRCVo', 'scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882', 'scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bitFob-cdYD-3rME-pWpf-d0Oe-tZrO-DmmTUg', 'scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d', 'scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8', 'dm-uuid-LVM-rmBHXKFLezqR10dP8H8U0r7XHP1E6d7zmL7pdLKpm9l3PbZSXHgncigSDt2qbhDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-11-11 00:53:43.140810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f', 'scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf', 'dm-uuid-LVM-3K6zWV4stuwcIKNGbseHBnjnPBQejVH5dka1KYQmQi8xGrRJGL8kZuALLXlxS0jx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140861 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140888 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.140897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.140958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140969 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-o3BmgF-O8xS-vwKg-1Fio-AIbW-OsjX-fvHcQf', 'scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34', 'scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.140979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgDw9r-q7oi-0h92-qUur-IBWW-GhWX-g3sn3E', 'scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3', 'scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860', 'scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e', 'dm-uuid-LVM-vabDWv0fZujdkgKW70tGqRuYZFTGJ2DYEcNW99loAKZ0E3ZBfyz83GFwvhxd4o8Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6', 
'dm-uuid-LVM-Nnz1FmMFX1o5YKqamCRJyumvXH3t2V0QCTNvmf9iEynTtPBkYcJamWNGQMCvfsTh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141475 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.141492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6ebA-L3W2-mfXJ-5sdZ-KmFZ-RNtR-ZRy3R1', 'scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e', 'scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DqlukF-ForK-kz1J-Gc1r-CrEx-haMu-1ZiUZB', 'scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7', 'scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c', 'scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part1', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part14', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part15', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part16', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141928 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.141961 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.141980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.141997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-11-11 00:53:43.142108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part1', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part14', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part15', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part16', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.142228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.142249 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.142258 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.142267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142347 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:53:43.142380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.142462 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:53:43.142480 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.142495 | orchestrator | 2025-11-11 00:53:43.142510 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-11 00:53:43.142524 | orchestrator | Tuesday 11 November 2025 00:43:54 +0000 (0:00:01.003) 0:00:26.470 ****** 2025-11-11 00:53:43.142547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962', 'dm-uuid-LVM-S3eHSIHD1uB7sO1A8koWrLKT6fx6SNzYKx9W40acTdBKUd94RLezbMeDN5mN8Ppa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.142563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2', 'dm-uuid-LVM-w2QPVVCR86DwYlu6QkrJB3O0tNM0SE156HGTBXUZ41JMh0kCuzp1wpN1AfNOcRPH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.142590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.142605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.142620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8', 'dm-uuid-LVM-rmBHXKFLezqR10dP8H8U0r7XHP1E6d7zmL7pdLKpm9l3PbZSXHgncigSDt2qbhDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHG2AP-RMJ2-XU8z-urBi-TjE9-JjnK-7sRCVo', 'scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882', 'scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf', 'dm-uuid-LVM-3K6zWV4stuwcIKNGbseHBnjnPBQejVH5dka1KYQmQi8xGrRJGL8kZuALLXlxS0jx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144689 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bitFob-cdYD-3rME-pWpf-d0Oe-tZrO-DmmTUg', 'scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d', 'scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144702 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f', 'scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144755 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144779 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e', 'dm-uuid-LVM-vabDWv0fZujdkgKW70tGqRuYZFTGJ2DYEcNW99loAKZ0E3ZBfyz83GFwvhxd4o8Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6', 'dm-uuid-LVM-Nnz1FmMFX1o5YKqamCRJyumvXH3t2V0QCTNvmf9iEynTtPBkYcJamWNGQMCvfsTh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-11 00:53:43.144977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.144991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145023 | orchestrator | skipping: [testbed-node-3] 
2025-11-11 00:53:43.145038 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-o3BmgF-O8xS-vwKg-1Fio-AIbW-OsjX-fvHcQf', 'scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34', 'scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145064 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgDw9r-q7oi-0h92-qUur-IBWW-GhWX-g3sn3E', 'scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3', 'scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145090 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860', 'scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145169 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6ebA-L3W2-mfXJ-5sdZ-KmFZ-RNtR-ZRy3R1', 'scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e', 'scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DqlukF-ForK-kz1J-Gc1r-CrEx-haMu-1ZiUZB', 'scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7', 'scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c', 'scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145254 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145271 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145283 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145306 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145317 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145358 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.145369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:53:43.145381 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part1', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part14', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part15', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part16', 'scsi-SQEMU_QEMU_HARDDISK_4d550be8-3d05-49fa-a4d6-b58d7283d515-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-11 00:53:43.145451 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.145471 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.145482 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.145493 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146215 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146241 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146254 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146324 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part1', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part14', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part15', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part16', 'scsi-SQEMU_QEMU_HARDDISK_17edd79b-0338-48a8-aec5-a06e3eed4f01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize':
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146339 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146375 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.146389 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.146461 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.146475 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146488 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146507 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146519 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146532 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146544 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146683 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146697 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146715 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fce1e7d-7889-4141-aff9-09cb3f25b974-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-11 00:53:43.146746 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.146758 | orchestrator |
2025-11-11 00:53:43.146775 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists]
******************************
2025-11-11 00:53:43.146788 | orchestrator | Tuesday 11 November 2025 00:43:55 +0000 (0:00:01.155) 0:00:27.625 ******
2025-11-11 00:53:43.146800 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.146811 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.146822 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.146832 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.146843 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.146853 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.146864 | orchestrator |
2025-11-11 00:53:43.146875 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-11-11 00:53:43.146885 | orchestrator | Tuesday 11 November 2025 00:43:56 +0000 (0:00:01.018) 0:00:28.643 ******
2025-11-11 00:53:43.146896 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.146906 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.146917 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.146927 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.146938 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.146948 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.146958 | orchestrator |
2025-11-11 00:53:43.146968 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-11 00:53:43.146978 | orchestrator | Tuesday 11 November 2025 00:43:57 +0000 (0:00:00.489) 0:00:29.132 ******
2025-11-11 00:53:43.146987 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.146997 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147006 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147016 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147025 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147034 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147044 | orchestrator |
2025-11-11 00:53:43.147053 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-11 00:53:43.147063 | orchestrator | Tuesday 11 November 2025 00:43:57 +0000 (0:00:00.594) 0:00:29.727 ******
2025-11-11 00:53:43.147072 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.147081 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147091 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147100 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147109 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147119 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147128 | orchestrator |
2025-11-11 00:53:43.147138 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-11 00:53:43.147147 | orchestrator | Tuesday 11 November 2025 00:43:58 +0000 (0:00:00.564) 0:00:30.292 ******
2025-11-11 00:53:43.147157 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.147166 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147180 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147190 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147199 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147209 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147218 | orchestrator |
2025-11-11 00:53:43.147227 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-11 00:53:43.147237 | orchestrator | Tuesday 11 November 2025 00:43:58 +0000 (0:00:00.720) 0:00:31.012 ******
2025-11-11 00:53:43.147247 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.147256 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147272 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147282 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147291 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147301 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147310 | orchestrator |
2025-11-11 00:53:43.147320 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-11-11 00:53:43.147329 | orchestrator | Tuesday 11 November 2025 00:43:59 +0000 (0:00:00.556) 0:00:31.569 ******
2025-11-11 00:53:43.147338 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-11-11 00:53:43.147348 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-11-11 00:53:43.147357 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-11-11 00:53:43.147367 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-11-11 00:53:43.147376 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-11-11 00:53:43.147386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:53:43.147412 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-11-11 00:53:43.147422 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-11-11 00:53:43.147431 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-11-11 00:53:43.147441 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-11-11 00:53:43.147450 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:53:43.147459 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-11-11 00:53:43.147468 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-11-11 00:53:43.147478 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-11-11 00:53:43.147487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:53:43.147496 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-11-11 00:53:43.147506 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-11-11 00:53:43.147515 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-11-11 00:53:43.147524 | orchestrator |
2025-11-11 00:53:43.147534 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-11-11 00:53:43.147543 | orchestrator | Tuesday 11 November 2025 00:44:02 +0000 (0:00:02.617) 0:00:34.187 ******
2025-11-11 00:53:43.147553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-11 00:53:43.147563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-11 00:53:43.147572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-11 00:53:43.147582 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.147592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-11 00:53:43.147601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-11 00:53:43.147611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-11 00:53:43.147620 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147630 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-11 00:53:43.147644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-11 00:53:43.147655 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-11 00:53:43.147664 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:53:43.147683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:53:43.147693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:53:43.147702 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-11-11 00:53:43.147721 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-11-11 00:53:43.147730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-11-11 00:53:43.147740 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147749 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-11-11 00:53:43.147765 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-11-11 00:53:43.147775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-11-11 00:53:43.147784 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147794 | orchestrator |
2025-11-11 00:53:43.147804 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-11-11 00:53:43.147813 | orchestrator | Tuesday 11 November 2025 00:44:02 +0000 (0:00:00.682) 0:00:34.869 ******
2025-11-11 00:53:43.147823 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.147832 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.147841 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.147852 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.147861 | orchestrator |
2025-11-11 00:53:43.147871 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-11-11 00:53:43.147881 | orchestrator | Tuesday 11 November 2025 00:44:03 +0000 (0:00:00.985) 0:00:35.855 ******
2025-11-11 00:53:43.147891 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.147901 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147910 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147919 | orchestrator |
2025-11-11 00:53:43.147929 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-11-11 00:53:43.147946 | orchestrator | Tuesday 11 November 2025 00:44:04 +0000 (0:00:00.341) 0:00:36.196 ******
2025-11-11 00:53:43.147956 | orchestrator
| skipping: [testbed-node-3]
2025-11-11 00:53:43.147966 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.147975 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.147985 | orchestrator |
2025-11-11 00:53:43.147994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-11-11 00:53:43.148004 | orchestrator | Tuesday 11 November 2025 00:44:04 +0000 (0:00:00.360) 0:00:36.557 ******
2025-11-11 00:53:43.148013 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.148023 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.148032 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.148041 | orchestrator |
2025-11-11 00:53:43.148051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-11-11 00:53:43.148060 | orchestrator | Tuesday 11 November 2025 00:44:04 +0000 (0:00:00.460) 0:00:37.018 ******
2025-11-11 00:53:43.148070 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.148079 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.148089 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.148098 | orchestrator |
2025-11-11 00:53:43.148108 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-11-11 00:53:43.148117 | orchestrator | Tuesday 11 November 2025 00:44:05 +0000 (0:00:00.760) 0:00:37.779 ******
2025-11-11 00:53:43.148127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.148136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.148145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.148155 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.148164 | orchestrator |
2025-11-11 00:53:43.148174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-11-11 00:53:43.148183 | orchestrator | Tuesday 11 November 2025 00:44:06 +0000 (0:00:00.391) 0:00:38.170 ******
2025-11-11 00:53:43.148193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.148202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.148212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.148221 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.148230 | orchestrator |
2025-11-11 00:53:43.148240 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-11-11 00:53:43.148249 | orchestrator | Tuesday 11 November 2025 00:44:06 +0000 (0:00:00.405) 0:00:38.575 ******
2025-11-11 00:53:43.148264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.148274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.148283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.148293 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.148302 | orchestrator |
2025-11-11 00:53:43.148312 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-11-11 00:53:43.148321 | orchestrator | Tuesday 11 November 2025 00:44:06 +0000 (0:00:00.400) 0:00:38.976 ******
2025-11-11 00:53:43.148330 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.148340 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.148349 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.148359 | orchestrator |
2025-11-11 00:53:43.148368 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-11-11 00:53:43.148378 | orchestrator | Tuesday 11 November 2025 00:44:07 +0000 (0:00:00.471) 0:00:39.447 ******
2025-11-11 00:53:43.148388 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-11-11 00:53:43.148410 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-11-11 00:53:43.148426 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-11-11 00:53:43.148436 | orchestrator |
2025-11-11 00:53:43.148445 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-11-11 00:53:43.148455 | orchestrator | Tuesday 11 November 2025 00:44:08 +0000 (0:00:01.317) 0:00:40.764 ******
2025-11-11 00:53:43.148464 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-11 00:53:43.148475 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-11 00:53:43.148484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-11 00:53:43.148494 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.148503 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-11-11 00:53:43.148512 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-11-11 00:53:43.148522 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-11-11 00:53:43.148531 | orchestrator |
2025-11-11 00:53:43.148541 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-11-11 00:53:43.148550 | orchestrator | Tuesday 11 November 2025 00:44:10 +0000 (0:00:01.401) 0:00:42.166 ******
2025-11-11 00:53:43.148559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-11 00:53:43.148569 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-11 00:53:43.148578 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-11 00:53:43.148588 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.148597 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-11-11 00:53:43.148607 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-11-11 00:53:43.148616 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-11-11 00:53:43.148626 | orchestrator |
2025-11-11 00:53:43.148635 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-11 00:53:43.148649 | orchestrator | Tuesday 11 November 2025 00:44:12 +0000 (0:00:02.275) 0:00:44.442 ******
2025-11-11 00:53:43.148660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.148671 | orchestrator |
2025-11-11 00:53:43.148680 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-11 00:53:43.148690 | orchestrator | Tuesday 11 November 2025 00:44:14 +0000 (0:00:02.136) 0:00:46.578 ******
2025-11-11 00:53:43.148706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.148715 | orchestrator |
2025-11-11 00:53:43.148725 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-11 00:53:43.148734 | orchestrator | Tuesday 11 November 2025 00:44:15 +0000 (0:00:01.499) 0:00:48.077 ******
2025-11-11 00:53:43.148743 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.148753 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.148762 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.148772 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.148781 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.148791 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.148800 | orchestrator |
2025-11-11 00:53:43.148809 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-11 00:53:43.148819 | orchestrator | Tuesday 11 November 2025 00:44:16 +0000 (0:00:00.976) 0:00:49.054 ******
2025-11-11 00:53:43.148828 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.148838 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.148847 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.148856 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.148866 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.148875 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.148885 | orchestrator |
2025-11-11 00:53:43.148894 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-11 00:53:43.148904 | orchestrator | Tuesday 11 November 2025 00:44:18 +0000 (0:00:01.129) 0:00:50.183 ******
2025-11-11 00:53:43.148913 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.148923 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.148932 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.148941 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.148951 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.148960 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.148970 | orchestrator |
2025-11-11 00:53:43.148979 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-11 00:53:43.148989 | orchestrator | Tuesday 11 November 2025 00:44:18 +0000 (0:00:00.747) 0:00:50.931 ******
2025-11-11 00:53:43.148998 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.149007 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.149017 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.149026 | orchestrator | skipping: [testbed-node-2]
2025-11-11
00:53:43.149035 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149045 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149054 | orchestrator | 2025-11-11 00:53:43.149064 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-11 00:53:43.149073 | orchestrator | Tuesday 11 November 2025 00:44:19 +0000 (0:00:01.159) 0:00:52.091 ****** 2025-11-11 00:53:43.149083 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.149092 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.149101 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.149110 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.149120 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.149135 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.149145 | orchestrator | 2025-11-11 00:53:43.149154 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-11 00:53:43.149164 | orchestrator | Tuesday 11 November 2025 00:44:20 +0000 (0:00:00.941) 0:00:53.032 ****** 2025-11-11 00:53:43.149173 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.149183 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.149192 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.149202 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149211 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149220 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149235 | orchestrator | 2025-11-11 00:53:43.149245 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-11 00:53:43.149255 | orchestrator | Tuesday 11 November 2025 00:44:21 +0000 (0:00:00.627) 0:00:53.659 ****** 2025-11-11 00:53:43.149264 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.149274 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.149283 | orchestrator | 
skipping: [testbed-node-5] 2025-11-11 00:53:43.149293 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149302 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149311 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149320 | orchestrator | 2025-11-11 00:53:43.149330 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-11 00:53:43.149339 | orchestrator | Tuesday 11 November 2025 00:44:22 +0000 (0:00:00.583) 0:00:54.243 ****** 2025-11-11 00:53:43.149349 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.149358 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149368 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149377 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.149387 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.149410 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.149420 | orchestrator | 2025-11-11 00:53:43.149430 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-11 00:53:43.149439 | orchestrator | Tuesday 11 November 2025 00:44:23 +0000 (0:00:01.137) 0:00:55.380 ****** 2025-11-11 00:53:43.149448 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.149458 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149467 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149477 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.149486 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.149495 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.149504 | orchestrator | 2025-11-11 00:53:43.149514 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-11 00:53:43.149523 | orchestrator | Tuesday 11 November 2025 00:44:24 +0000 (0:00:01.008) 0:00:56.389 ****** 2025-11-11 00:53:43.149537 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.149547 | orchestrator | 
skipping: [testbed-node-4] 2025-11-11 00:53:43.149557 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.149566 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149576 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149585 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149594 | orchestrator | 2025-11-11 00:53:43.149604 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-11 00:53:43.149613 | orchestrator | Tuesday 11 November 2025 00:44:25 +0000 (0:00:00.728) 0:00:57.118 ****** 2025-11-11 00:53:43.149623 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.149632 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.149641 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.149650 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.149660 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.149669 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.149678 | orchestrator | 2025-11-11 00:53:43.149688 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-11 00:53:43.149697 | orchestrator | Tuesday 11 November 2025 00:44:25 +0000 (0:00:00.452) 0:00:57.571 ****** 2025-11-11 00:53:43.149707 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.149716 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149725 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149735 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149744 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149753 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149763 | orchestrator | 2025-11-11 00:53:43.149772 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-11 00:53:43.149782 | orchestrator | Tuesday 11 November 2025 00:44:26 +0000 (0:00:00.730) 0:00:58.301 ****** 2025-11-11 
00:53:43.149791 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.149806 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149816 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149825 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149835 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149844 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149853 | orchestrator | 2025-11-11 00:53:43.149863 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-11 00:53:43.149872 | orchestrator | Tuesday 11 November 2025 00:44:26 +0000 (0:00:00.565) 0:00:58.867 ****** 2025-11-11 00:53:43.149889 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.149905 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.149920 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.149935 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.149949 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.149964 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.149980 | orchestrator | 2025-11-11 00:53:43.149995 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-11 00:53:43.150009 | orchestrator | Tuesday 11 November 2025 00:44:27 +0000 (0:00:00.691) 0:00:59.558 ****** 2025-11-11 00:53:43.150068 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.150086 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.150102 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.150118 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.150134 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.150147 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.150157 | orchestrator | 2025-11-11 00:53:43.150166 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-11 00:53:43.150175 
| orchestrator | Tuesday 11 November 2025 00:44:28 +0000 (0:00:00.601) 0:01:00.160 ****** 2025-11-11 00:53:43.150185 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.150195 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.150204 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.150213 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.150231 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.150241 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.150250 | orchestrator | 2025-11-11 00:53:43.150260 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-11 00:53:43.150269 | orchestrator | Tuesday 11 November 2025 00:44:28 +0000 (0:00:00.736) 0:01:00.896 ****** 2025-11-11 00:53:43.150278 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.150288 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.150297 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.150306 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.150316 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.150325 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.150334 | orchestrator | 2025-11-11 00:53:43.150344 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-11 00:53:43.150353 | orchestrator | Tuesday 11 November 2025 00:44:29 +0000 (0:00:00.509) 0:01:01.406 ****** 2025-11-11 00:53:43.150363 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.150372 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.150381 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.150446 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.150460 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.150469 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.150479 | orchestrator | 2025-11-11 00:53:43.150487 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2025-11-11 00:53:43.150495 | orchestrator | Tuesday 11 November 2025 00:44:29 +0000 (0:00:00.654) 0:01:02.060 ****** 2025-11-11 00:53:43.150503 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.150510 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.150518 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.150526 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.150533 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.150541 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.150557 | orchestrator | 2025-11-11 00:53:43.150565 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-11-11 00:53:43.150573 | orchestrator | Tuesday 11 November 2025 00:44:30 +0000 (0:00:01.007) 0:01:03.067 ****** 2025-11-11 00:53:43.150581 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.150588 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.150596 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.150604 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.150612 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.150619 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.150627 | orchestrator | 2025-11-11 00:53:43.150635 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-11-11 00:53:43.150648 | orchestrator | Tuesday 11 November 2025 00:44:32 +0000 (0:00:01.411) 0:01:04.479 ****** 2025-11-11 00:53:43.150660 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.150673 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.150693 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.150709 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.150723 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.150736 | orchestrator | changed: [testbed-node-2] 2025-11-11 
00:53:43.150748 | orchestrator | 2025-11-11 00:53:43.150761 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-11-11 00:53:43.150774 | orchestrator | Tuesday 11 November 2025 00:44:34 +0000 (0:00:02.248) 0:01:06.727 ****** 2025-11-11 00:53:43.150787 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.150802 | orchestrator | 2025-11-11 00:53:43.150813 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-11-11 00:53:43.150826 | orchestrator | Tuesday 11 November 2025 00:44:35 +0000 (0:00:01.080) 0:01:07.808 ****** 2025-11-11 00:53:43.150839 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.150850 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.150863 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.150876 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.150889 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.150901 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.150915 | orchestrator | 2025-11-11 00:53:43.150927 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-11-11 00:53:43.150939 | orchestrator | Tuesday 11 November 2025 00:44:36 +0000 (0:00:00.596) 0:01:08.404 ****** 2025-11-11 00:53:43.150953 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.150965 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.150978 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.150991 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151004 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151019 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151033 | orchestrator | 2025-11-11 00:53:43.151047 | orchestrator 
| TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-11-11 00:53:43.151060 | orchestrator | Tuesday 11 November 2025 00:44:36 +0000 (0:00:00.568) 0:01:08.972 ****** 2025-11-11 00:53:43.151074 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151088 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151101 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151113 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151121 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151129 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-11 00:53:43.151137 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151157 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151174 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151192 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151227 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151242 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-11 00:53:43.151255 | orchestrator | 2025-11-11 00:53:43.151268 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-11-11 00:53:43.151283 | orchestrator | Tuesday 11 November 2025 00:44:38 +0000 (0:00:01.531) 0:01:10.504 ****** 2025-11-11 00:53:43.151296 | orchestrator | changed: 
[testbed-node-4] 2025-11-11 00:53:43.151309 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.151318 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.151326 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.151333 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.151341 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.151348 | orchestrator | 2025-11-11 00:53:43.151356 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-11-11 00:53:43.151364 | orchestrator | Tuesday 11 November 2025 00:44:39 +0000 (0:00:00.916) 0:01:11.421 ****** 2025-11-11 00:53:43.151371 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.151379 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.151386 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.151417 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151426 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151434 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151441 | orchestrator | 2025-11-11 00:53:43.151449 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-11-11 00:53:43.151457 | orchestrator | Tuesday 11 November 2025 00:44:40 +0000 (0:00:00.795) 0:01:12.216 ****** 2025-11-11 00:53:43.151464 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.151472 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.151480 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.151487 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151495 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151502 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151510 | orchestrator | 2025-11-11 00:53:43.151518 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-11-11 00:53:43.151525 | 
orchestrator | Tuesday 11 November 2025 00:44:40 +0000 (0:00:00.622) 0:01:12.839 ****** 2025-11-11 00:53:43.151533 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.151541 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.151548 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.151556 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151570 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151578 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151585 | orchestrator | 2025-11-11 00:53:43.151593 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-11-11 00:53:43.151601 | orchestrator | Tuesday 11 November 2025 00:44:41 +0000 (0:00:00.820) 0:01:13.659 ****** 2025-11-11 00:53:43.151609 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.151617 | orchestrator | 2025-11-11 00:53:43.151625 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-11-11 00:53:43.151632 | orchestrator | Tuesday 11 November 2025 00:44:42 +0000 (0:00:01.255) 0:01:14.915 ****** 2025-11-11 00:53:43.151640 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.151648 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.151663 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.151671 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.151679 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.151687 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.151694 | orchestrator | 2025-11-11 00:53:43.151702 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-11-11 00:53:43.151710 | orchestrator | Tuesday 11 November 2025 00:45:30 +0000 (0:00:47.414) 0:02:02.329 ****** 2025-11-11 
00:53:43.151718 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151726 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151733 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151741 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.151749 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151757 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151764 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151772 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.151780 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151787 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151795 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151803 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.151811 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151818 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151826 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151834 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151842 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151849 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151857 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151865 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151880 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-11 00:53:43.151889 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-11 00:53:43.151897 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-11 00:53:43.151905 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151912 | orchestrator | 2025-11-11 00:53:43.151920 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-11-11 00:53:43.151928 | orchestrator | Tuesday 11 November 2025 00:45:31 +0000 (0:00:00.796) 0:02:03.126 ****** 2025-11-11 00:53:43.151936 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.151944 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.151951 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.151959 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.151967 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.151975 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.151983 | orchestrator | 2025-11-11 00:53:43.151991 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-11-11 00:53:43.151999 | orchestrator | Tuesday 11 November 2025 00:45:31 +0000 (0:00:00.535) 0:02:03.661 ****** 2025-11-11 00:53:43.152007 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152014 | orchestrator | 2025-11-11 00:53:43.152022 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-11-11 00:53:43.152030 | orchestrator | Tuesday 11 November 2025 00:45:31 +0000 (0:00:00.157) 0:02:03.819 ****** 2025-11-11 00:53:43.152043 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152051 | orchestrator | 
skipping: [testbed-node-4] 2025-11-11 00:53:43.152059 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152067 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152075 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152082 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152090 | orchestrator | 2025-11-11 00:53:43.152098 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-11-11 00:53:43.152106 | orchestrator | Tuesday 11 November 2025 00:45:32 +0000 (0:00:00.812) 0:02:04.632 ****** 2025-11-11 00:53:43.152114 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152121 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152129 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152137 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152145 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152152 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152160 | orchestrator | 2025-11-11 00:53:43.152168 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-11-11 00:53:43.152179 | orchestrator | Tuesday 11 November 2025 00:45:33 +0000 (0:00:00.589) 0:02:05.222 ****** 2025-11-11 00:53:43.152187 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152195 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152202 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152210 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152218 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152225 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152233 | orchestrator | 2025-11-11 00:53:43.152241 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-11-11 00:53:43.152249 | orchestrator | Tuesday 11 November 2025 00:45:34 +0000 
(0:00:00.897) 0:02:06.119 ****** 2025-11-11 00:53:43.152257 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.152265 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.152272 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.152280 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.152288 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.152297 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.152310 | orchestrator | 2025-11-11 00:53:43.152324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-11-11 00:53:43.152344 | orchestrator | Tuesday 11 November 2025 00:45:37 +0000 (0:00:03.591) 0:02:09.711 ****** 2025-11-11 00:53:43.152359 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.152371 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.152383 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.152448 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.152463 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.152475 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.152489 | orchestrator | 2025-11-11 00:53:43.152503 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-11-11 00:53:43.152516 | orchestrator | Tuesday 11 November 2025 00:45:38 +0000 (0:00:00.760) 0:02:10.471 ****** 2025-11-11 00:53:43.152529 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.152544 | orchestrator | 2025-11-11 00:53:43.152557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-11-11 00:53:43.152571 | orchestrator | Tuesday 11 November 2025 00:45:39 +0000 (0:00:01.155) 0:02:11.626 ****** 2025-11-11 00:53:43.152584 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152598 | orchestrator | 
skipping: [testbed-node-4] 2025-11-11 00:53:43.152611 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152624 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152645 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152658 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152671 | orchestrator | 2025-11-11 00:53:43.152693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-11-11 00:53:43.152706 | orchestrator | Tuesday 11 November 2025 00:45:40 +0000 (0:00:00.594) 0:02:12.221 ****** 2025-11-11 00:53:43.152717 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152730 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152743 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152757 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152767 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152775 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152783 | orchestrator | 2025-11-11 00:53:43.152791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-11-11 00:53:43.152798 | orchestrator | Tuesday 11 November 2025 00:45:40 +0000 (0:00:00.832) 0:02:13.054 ****** 2025-11-11 00:53:43.152806 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152814 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152829 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152837 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152845 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152853 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152861 | orchestrator | 2025-11-11 00:53:43.152869 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-11-11 00:53:43.152876 | orchestrator | Tuesday 11 November 2025 00:45:41 +0000 
(0:00:00.646) 0:02:13.700 ****** 2025-11-11 00:53:43.152884 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152892 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152900 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152907 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152915 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152923 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.152930 | orchestrator | 2025-11-11 00:53:43.152938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-11-11 00:53:43.152946 | orchestrator | Tuesday 11 November 2025 00:45:42 +0000 (0:00:00.790) 0:02:14.491 ****** 2025-11-11 00:53:43.152954 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.152961 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.152969 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.152977 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.152984 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.152992 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.153000 | orchestrator | 2025-11-11 00:53:43.153007 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-11-11 00:53:43.153015 | orchestrator | Tuesday 11 November 2025 00:45:42 +0000 (0:00:00.576) 0:02:15.068 ****** 2025-11-11 00:53:43.153023 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.153030 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.153038 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.153046 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.153054 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.153061 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.153069 | orchestrator | 2025-11-11 00:53:43.153077 | orchestrator | TASK [ceph-container-common 
: Set_fact ceph_release pacific] ******************* 2025-11-11 00:53:43.153085 | orchestrator | Tuesday 11 November 2025 00:45:43 +0000 (0:00:00.766) 0:02:15.834 ****** 2025-11-11 00:53:43.153092 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.153100 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.153108 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.153115 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.153123 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.153131 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.153138 | orchestrator | 2025-11-11 00:53:43.153157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-11-11 00:53:43.153165 | orchestrator | Tuesday 11 November 2025 00:45:44 +0000 (0:00:00.579) 0:02:16.414 ****** 2025-11-11 00:53:43.153181 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.153188 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.153196 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.153204 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.153211 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.153219 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.153226 | orchestrator | 2025-11-11 00:53:43.153234 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-11-11 00:53:43.153242 | orchestrator | Tuesday 11 November 2025 00:45:45 +0000 (0:00:00.747) 0:02:17.161 ****** 2025-11-11 00:53:43.153250 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.153258 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.153265 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.153273 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.153281 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.153289 | orchestrator | ok: [testbed-node-2] 2025-11-11 
00:53:43.153296 | orchestrator | 2025-11-11 00:53:43.153304 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-11-11 00:53:43.153312 | orchestrator | Tuesday 11 November 2025 00:45:46 +0000 (0:00:01.177) 0:02:18.338 ****** 2025-11-11 00:53:43.153321 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.153329 | orchestrator | 2025-11-11 00:53:43.153337 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-11-11 00:53:43.153344 | orchestrator | Tuesday 11 November 2025 00:45:47 +0000 (0:00:00.954) 0:02:19.293 ****** 2025-11-11 00:53:43.153352 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-11-11 00:53:43.153360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-11-11 00:53:43.153368 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-11-11 00:53:43.153376 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-11-11 00:53:43.153384 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-11-11 00:53:43.153443 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-11-11 00:53:43.153454 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-11-11 00:53:43.153462 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153470 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-11-11 00:53:43.153478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-11-11 00:53:43.153485 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-11-11 00:53:43.153493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-11-11 00:53:43.153501 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-11-11 
00:53:43.153509 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153516 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153524 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153540 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-11-11 00:53:43.153561 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153569 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153576 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153599 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-11-11 00:53:43.153607 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153628 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153635 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153651 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-11-11 00:53:43.153666 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153673 | orchestrator | 
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153681 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153689 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-11-11 00:53:43.153712 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153719 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153727 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-11 00:53:43.153735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153750 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-11-11 00:53:43.153758 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153766 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153775 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153789 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153802 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153815 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-11-11 00:53:43.153827 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-11 00:53:43.153841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-11 00:53:43.153854 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 
2025-11-11 00:53:43.153863 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153870 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-11 00:53:43.153876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-11 00:53:43.153883 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153896 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153902 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153909 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.153916 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-11 00:53:43.153922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153929 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153935 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.153942 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153948 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153960 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-11 00:53:43.153967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.153974 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.153980 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.153987 | orchestrator | 
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.153994 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.154000 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-11 00:53:43.154007 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.154114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.154135 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-11-11 00:53:43.154143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.154149 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.154156 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-11 00:53:43.154162 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.154169 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-11-11 00:53:43.154175 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.154182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.154189 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.154195 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-11 00:53:43.154202 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-11-11 00:53:43.154208 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-11-11 00:53:43.154215 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-11-11 00:53:43.154222 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-11-11 00:53:43.154228 | 
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-11-11 00:53:43.154235 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-11-11 00:53:43.154241 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-11-11 00:53:43.154247 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-11-11 00:53:43.154307 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-11-11 00:53:43.154322 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-11-11 00:53:43.154329 | orchestrator | 2025-11-11 00:53:43.154336 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-11-11 00:53:43.154342 | orchestrator | Tuesday 11 November 2025 00:45:53 +0000 (0:00:06.431) 0:02:25.724 ****** 2025-11-11 00:53:43.154349 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154356 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154362 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154372 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.154379 | orchestrator | 2025-11-11 00:53:43.154386 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-11-11 00:53:43.154408 | orchestrator | Tuesday 11 November 2025 00:45:54 +0000 (0:00:00.911) 0:02:26.636 ****** 2025-11-11 00:53:43.154415 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154422 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154435 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154442 | orchestrator | 2025-11-11 00:53:43.154448 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-11-11 00:53:43.154455 | orchestrator | Tuesday 11 November 2025 00:45:55 +0000 (0:00:00.685) 0:02:27.321 ****** 2025-11-11 00:53:43.154462 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154468 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154475 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.154482 | orchestrator | 2025-11-11 00:53:43.154488 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-11-11 00:53:43.154495 | orchestrator | Tuesday 11 November 2025 00:45:56 +0000 (0:00:01.176) 0:02:28.497 ****** 2025-11-11 00:53:43.154502 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.154508 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.154515 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.154521 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154528 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154534 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154541 | orchestrator | 2025-11-11 00:53:43.154547 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-11-11 00:53:43.154554 | orchestrator | Tuesday 11 November 2025 00:45:57 +0000 (0:00:00.752) 0:02:29.250 ****** 2025-11-11 00:53:43.154560 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.154567 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.154573 | orchestrator | ok: 
[testbed-node-5] 2025-11-11 00:53:43.154580 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154586 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154593 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154599 | orchestrator | 2025-11-11 00:53:43.154606 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-11-11 00:53:43.154613 | orchestrator | Tuesday 11 November 2025 00:45:57 +0000 (0:00:00.562) 0:02:29.812 ****** 2025-11-11 00:53:43.154619 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.154626 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154632 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154639 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154645 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154652 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154658 | orchestrator | 2025-11-11 00:53:43.154671 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-11-11 00:53:43.154678 | orchestrator | Tuesday 11 November 2025 00:45:58 +0000 (0:00:00.769) 0:02:30.581 ****** 2025-11-11 00:53:43.154684 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.154691 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154697 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154704 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154710 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154717 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154723 | orchestrator | 2025-11-11 00:53:43.154730 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-11-11 00:53:43.154737 | orchestrator | Tuesday 11 November 2025 00:45:59 +0000 (0:00:00.611) 0:02:31.193 ****** 2025-11-11 00:53:43.154743 | orchestrator | skipping: 
[testbed-node-3] 2025-11-11 00:53:43.154750 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154756 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154763 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154769 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154780 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154787 | orchestrator | 2025-11-11 00:53:43.154794 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-11-11 00:53:43.154800 | orchestrator | Tuesday 11 November 2025 00:45:59 +0000 (0:00:00.784) 0:02:31.978 ****** 2025-11-11 00:53:43.154807 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.154813 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154820 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154826 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154833 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154839 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154846 | orchestrator | 2025-11-11 00:53:43.154852 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-11-11 00:53:43.154859 | orchestrator | Tuesday 11 November 2025 00:46:00 +0000 (0:00:00.568) 0:02:32.547 ****** 2025-11-11 00:53:43.154865 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.154872 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154878 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154885 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154891 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154898 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154904 | orchestrator | 2025-11-11 00:53:43.154911 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-11-11 00:53:43.154921 | orchestrator | Tuesday 11 November 2025 00:46:01 +0000 (0:00:00.745) 0:02:33.293 ****** 2025-11-11 00:53:43.154928 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.154934 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.154941 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.154947 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154954 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154960 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.154967 | orchestrator | 2025-11-11 00:53:43.154973 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-11-11 00:53:43.154980 | orchestrator | Tuesday 11 November 2025 00:46:01 +0000 (0:00:00.583) 0:02:33.876 ****** 2025-11-11 00:53:43.154986 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.154993 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.154999 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155006 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.155013 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.155019 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.155026 | orchestrator | 2025-11-11 00:53:43.155032 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-11-11 00:53:43.155039 | orchestrator | Tuesday 11 November 2025 00:46:04 +0000 (0:00:03.042) 0:02:36.918 ****** 2025-11-11 00:53:43.155045 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.155052 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.155058 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.155065 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155071 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155078 | orchestrator | skipping: [testbed-node-2] 
2025-11-11 00:53:43.155084 | orchestrator | 2025-11-11 00:53:43.155091 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-11-11 00:53:43.155098 | orchestrator | Tuesday 11 November 2025 00:46:05 +0000 (0:00:00.600) 0:02:37.519 ****** 2025-11-11 00:53:43.155104 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.155111 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.155117 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.155124 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155130 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155137 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155143 | orchestrator | 2025-11-11 00:53:43.155150 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-11-11 00:53:43.155161 | orchestrator | Tuesday 11 November 2025 00:46:06 +0000 (0:00:00.735) 0:02:38.254 ****** 2025-11-11 00:53:43.155167 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155174 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155180 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155187 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155193 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155200 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155206 | orchestrator | 2025-11-11 00:53:43.155213 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-11-11 00:53:43.155220 | orchestrator | Tuesday 11 November 2025 00:46:06 +0000 (0:00:00.601) 0:02:38.856 ****** 2025-11-11 00:53:43.155226 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.155233 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.155240 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.155246 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155258 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155265 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155271 | orchestrator | 2025-11-11 00:53:43.155278 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-11-11 00:53:43.155285 | orchestrator | Tuesday 11 November 2025 00:46:07 +0000 (0:00:00.769) 0:02:39.625 ****** 2025-11-11 00:53:43.155293 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-11-11 00:53:43.155302 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-11-11 00:53:43.155310 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155317 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-11-11 00:53:43.155324 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-11-11 00:53:43.155331 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155341 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-11-11 00:53:43.155348 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-11-11 00:53:43.155355 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155361 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155373 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155379 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155386 | orchestrator | 2025-11-11 00:53:43.155406 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-11-11 00:53:43.155413 | orchestrator | Tuesday 11 November 2025 00:46:08 +0000 (0:00:00.610) 0:02:40.236 ****** 2025-11-11 00:53:43.155419 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155426 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155432 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155439 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155445 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155452 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155458 | orchestrator | 
2025-11-11 00:53:43.155465 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-11-11 00:53:43.155471 | orchestrator | Tuesday 11 November 2025 00:46:08 +0000 (0:00:00.539) 0:02:40.775 ****** 2025-11-11 00:53:43.155478 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155485 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155491 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155497 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155504 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155510 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155517 | orchestrator | 2025-11-11 00:53:43.155523 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-11 00:53:43.155530 | orchestrator | Tuesday 11 November 2025 00:46:09 +0000 (0:00:00.766) 0:02:41.542 ****** 2025-11-11 00:53:43.155537 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155543 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155549 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155556 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.155562 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.155569 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.155575 | orchestrator | 2025-11-11 00:53:43.155582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-11 00:53:43.155589 | orchestrator | Tuesday 11 November 2025 00:46:10 +0000 (0:00:00.584) 0:02:42.126 ****** 2025-11-11 00:53:43.155596 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.155602 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.155608 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.155615 | orchestrator | skipping: 
[testbed-node-0]
2025-11-11 00:53:43.155621 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.155628 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.155634 | orchestrator |
2025-11-11 00:53:43.155641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-11-11 00:53:43.155653 | orchestrator | Tuesday 11 November 2025 00:46:10 +0000 (0:00:00.859) 0:02:42.986 ******
2025-11-11 00:53:43.155659 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.155666 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.155672 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.155679 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.155686 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.155692 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.155698 | orchestrator |
2025-11-11 00:53:43.155705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-11-11 00:53:43.155712 | orchestrator | Tuesday 11 November 2025 00:46:11 +0000 (0:00:00.679) 0:02:43.666 ******
2025-11-11 00:53:43.155718 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.155725 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.155731 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.155738 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.155745 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.155751 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.155762 | orchestrator |
2025-11-11 00:53:43.155769 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-11-11 00:53:43.155776 | orchestrator | Tuesday 11 November 2025 00:46:12 +0000 (0:00:00.893) 0:02:44.559 ******
2025-11-11 00:53:43.155782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.155789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.155795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.155802 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.155808 | orchestrator |
2025-11-11 00:53:43.155815 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-11-11 00:53:43.155821 | orchestrator | Tuesday 11 November 2025 00:46:12 +0000 (0:00:00.386) 0:02:44.945 ******
2025-11-11 00:53:43.155828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.155834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.155841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.155847 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.155854 | orchestrator |
2025-11-11 00:53:43.155861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-11-11 00:53:43.155867 | orchestrator | Tuesday 11 November 2025 00:46:13 +0000 (0:00:00.360) 0:02:45.306 ******
2025-11-11 00:53:43.155874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.155884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.155890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.155897 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.155903 | orchestrator |
2025-11-11 00:53:43.155910 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-11-11 00:53:43.155917 | orchestrator | Tuesday 11 November 2025 00:46:13 +0000 (0:00:00.385) 0:02:45.691 ******
2025-11-11 00:53:43.155923 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.155930 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.155936 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.155943 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.155950 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.155956 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.155963 | orchestrator |
2025-11-11 00:53:43.155969 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-11-11 00:53:43.155976 | orchestrator | Tuesday 11 November 2025 00:46:14 +0000 (0:00:00.776) 0:02:46.467 ******
2025-11-11 00:53:43.155982 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-11-11 00:53:43.155989 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-11-11 00:53:43.155995 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-11-11 00:53:43.156002 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-11-11 00:53:43.156008 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.156015 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-11-11 00:53:43.156021 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.156028 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-11-11 00:53:43.156034 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.156041 | orchestrator |
2025-11-11 00:53:43.156047 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-11-11 00:53:43.156054 | orchestrator | Tuesday 11 November 2025 00:46:15 +0000 (0:00:01.598) 0:02:48.066 ******
2025-11-11 00:53:43.156060 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.156067 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.156073 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.156080 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.156086 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.156093 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.156099 | orchestrator |
2025-11-11 00:53:43.156106 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-11-11 00:53:43.156120 | orchestrator | Tuesday 11 November 2025 00:46:18 +0000 (0:00:02.219) 0:02:50.286 ******
2025-11-11 00:53:43.156127 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.156133 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.156140 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.156146 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.156153 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.156159 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.156166 | orchestrator |
2025-11-11 00:53:43.156172 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-11-11 00:53:43.156179 | orchestrator | Tuesday 11 November 2025 00:46:19 +0000 (0:00:01.122) 0:02:51.409 ******
2025-11-11 00:53:43.156186 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156192 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.156199 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.156205 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.156212 | orchestrator |
2025-11-11 00:53:43.156219 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-11-11 00:53:43.156229 | orchestrator | Tuesday 11 November 2025 00:46:20 +0000 (0:00:00.779) 0:02:52.188 ******
2025-11-11 00:53:43.156236 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.156243 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.156249 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.156256 | orchestrator |
2025-11-11 00:53:43.156262 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-11-11 00:53:43.156269 | orchestrator | Tuesday 11 November 2025 00:46:20 +0000 (0:00:00.325) 0:02:52.513 ******
2025-11-11 00:53:43.156276 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.156282 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.156289 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.156295 | orchestrator |
2025-11-11 00:53:43.156302 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-11-11 00:53:43.156308 | orchestrator | Tuesday 11 November 2025 00:46:21 +0000 (0:00:01.423) 0:02:53.937 ******
2025-11-11 00:53:43.156315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:53:43.156321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:53:43.156328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:53:43.156334 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.156341 | orchestrator |
2025-11-11 00:53:43.156347 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-11-11 00:53:43.156354 | orchestrator | Tuesday 11 November 2025 00:46:22 +0000 (0:00:00.621) 0:02:54.558 ******
2025-11-11 00:53:43.156360 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.156367 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.156374 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.156380 | orchestrator |
2025-11-11 00:53:43.156387 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-11-11 00:53:43.156428 | orchestrator | Tuesday 11 November 2025 00:46:22 +0000 (0:00:00.295) 0:02:54.853 ******
2025-11-11 00:53:43.156435 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.156442 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.156448 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.156455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.156462 | orchestrator |
2025-11-11 00:53:43.156468 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-11-11 00:53:43.156475 | orchestrator | Tuesday 11 November 2025 00:46:23 +0000 (0:00:00.949) 0:02:55.802 ******
2025-11-11 00:53:43.156481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.156492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.156504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.156511 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156517 | orchestrator |
2025-11-11 00:53:43.156524 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-11-11 00:53:43.156531 | orchestrator | Tuesday 11 November 2025 00:46:24 +0000 (0:00:00.369) 0:02:56.172 ******
2025-11-11 00:53:43.156537 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156544 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.156550 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.156557 | orchestrator |
2025-11-11 00:53:43.156563 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-11-11 00:53:43.156570 | orchestrator | Tuesday 11 November 2025 00:46:24 +0000 (0:00:00.292) 0:02:56.464 ******
2025-11-11 00:53:43.156576 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156583 | orchestrator |
2025-11-11 00:53:43.156590 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-11-11 00:53:43.156596 | orchestrator | Tuesday 11 November 2025 00:46:24 +0000 (0:00:00.281) 0:02:56.684 ******
2025-11-11 00:53:43.156603 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156609 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.156615 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.156622 | orchestrator |
2025-11-11 00:53:43.156628 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-11-11 00:53:43.156635 | orchestrator | Tuesday 11 November 2025 00:46:24 +0000 (0:00:00.281) 0:02:56.965 ******
2025-11-11 00:53:43.156642 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156648 | orchestrator |
2025-11-11 00:53:43.156655 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-11-11 00:53:43.156661 | orchestrator | Tuesday 11 November 2025 00:46:25 +0000 (0:00:00.204) 0:02:57.170 ******
2025-11-11 00:53:43.156668 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156674 | orchestrator |
2025-11-11 00:53:43.156681 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-11-11 00:53:43.156688 | orchestrator | Tuesday 11 November 2025 00:46:25 +0000 (0:00:00.662) 0:02:57.832 ******
2025-11-11 00:53:43.156694 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156701 | orchestrator |
2025-11-11 00:53:43.156707 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-11-11 00:53:43.156714 | orchestrator | Tuesday 11 November 2025 00:46:25 +0000 (0:00:00.129) 0:02:57.962 ******
2025-11-11 00:53:43.156720 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156727 | orchestrator |
2025-11-11 00:53:43.156733 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-11-11 00:53:43.156740 | orchestrator | Tuesday 11 November 2025 00:46:26 +0000 (0:00:00.213) 0:02:58.176 ******
2025-11-11 00:53:43.156746 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156753 | orchestrator |
2025-11-11 00:53:43.156760 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-11-11 00:53:43.156766 | orchestrator | Tuesday 11 November 2025 00:46:26 +0000 (0:00:00.240) 0:02:58.417 ******
2025-11-11 00:53:43.156773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.156779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.156786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.156793 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156799 | orchestrator |
2025-11-11 00:53:43.156806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-11-11 00:53:43.156816 | orchestrator | Tuesday 11 November 2025 00:46:26 +0000 (0:00:00.394) 0:02:58.812 ******
2025-11-11 00:53:43.156823 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156830 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.156836 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.156843 | orchestrator |
2025-11-11 00:53:43.156850 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-11-11 00:53:43.156861 | orchestrator | Tuesday 11 November 2025 00:46:26 +0000 (0:00:00.299) 0:02:59.112 ******
2025-11-11 00:53:43.156868 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156874 | orchestrator |
2025-11-11 00:53:43.156881 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-11-11 00:53:43.156888 | orchestrator | Tuesday 11 November 2025 00:46:27 +0000 (0:00:00.220) 0:02:59.332 ******
2025-11-11 00:53:43.156894 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.156901 | orchestrator |
2025-11-11 00:53:43.156907 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-11-11 00:53:43.156914 | orchestrator | Tuesday 11 November 2025 00:46:27 +0000 (0:00:00.210) 0:02:59.543 ******
2025-11-11 00:53:43.156920 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.156927 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.156933 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.156940 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.156947 | orchestrator |
2025-11-11 00:53:43.156954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-11-11 00:53:43.156960 | orchestrator | Tuesday 11 November 2025 00:46:28 +0000 (0:00:00.967) 0:03:00.511 ******
2025-11-11 00:53:43.156966 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.156972 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.156978 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.156984 | orchestrator |
2025-11-11 00:53:43.156990 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-11-11 00:53:43.156996 | orchestrator | Tuesday 11 November 2025 00:46:28 +0000 (0:00:00.292) 0:03:00.804 ******
2025-11-11 00:53:43.157002 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.157009 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.157015 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.157021 | orchestrator |
2025-11-11 00:53:43.157027 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-11-11 00:53:43.157033 | orchestrator | Tuesday 11 November 2025 00:46:30 +0000 (0:00:01.339) 0:03:02.143 ******
2025-11-11 00:53:43.157044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.157051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.157057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.157063 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.157069 | orchestrator |
2025-11-11 00:53:43.157075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-11-11 00:53:43.157081 | orchestrator | Tuesday 11 November 2025 00:46:30 +0000 (0:00:00.637) 0:03:02.781 ******
2025-11-11 00:53:43.157087 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.157093 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.157099 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.157106 | orchestrator |
2025-11-11 00:53:43.157112 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-11-11 00:53:43.157118 | orchestrator | Tuesday 11 November 2025 00:46:31 +0000 (0:00:00.335) 0:03:03.117 ******
2025-11-11 00:53:43.157124 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157130 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157136 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.157148 | orchestrator |
2025-11-11 00:53:43.157154 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-11-11 00:53:43.157161 | orchestrator | Tuesday 11 November 2025 00:46:31 +0000 (0:00:00.935) 0:03:04.053 ******
2025-11-11 00:53:43.157167 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.157173 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.157179 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.157189 | orchestrator |
2025-11-11 00:53:43.157195 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-11-11 00:53:43.157201 | orchestrator | Tuesday 11 November 2025 00:46:32 +0000 (0:00:00.323) 0:03:04.377 ******
2025-11-11 00:53:43.157208 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.157214 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.157220 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.157226 | orchestrator |
2025-11-11 00:53:43.157232 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-11-11 00:53:43.157238 | orchestrator | Tuesday 11 November 2025 00:46:33 +0000 (0:00:01.187) 0:03:05.564 ******
2025-11-11 00:53:43.157244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.157250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.157256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.157262 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.157268 | orchestrator |
2025-11-11 00:53:43.157274 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-11-11 00:53:43.157281 | orchestrator | Tuesday 11 November 2025 00:46:34 +0000 (0:00:00.771) 0:03:06.336 ******
2025-11-11 00:53:43.157287 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.157293 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.157299 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.157305 | orchestrator |
2025-11-11 00:53:43.157311 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-11-11 00:53:43.157317 | orchestrator | Tuesday 11 November 2025 00:46:34 +0000 (0:00:00.296) 0:03:06.633 ******
2025-11-11 00:53:43.157323 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.157329 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.157336 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.157342 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157348 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157357 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157364 | orchestrator |
2025-11-11 00:53:43.157370 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-11-11 00:53:43.157376 | orchestrator | Tuesday 11 November 2025 00:46:35 +0000 (0:00:00.769) 0:03:07.402 ******
2025-11-11 00:53:43.157382 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.157388 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.157406 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.157412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.157418 | orchestrator |
2025-11-11 00:53:43.157424 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-11-11 00:53:43.157430 | orchestrator | Tuesday 11 November 2025 00:46:36 +0000 (0:00:00.973) 0:03:08.376 ******
2025-11-11 00:53:43.157436 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157443 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157449 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157455 | orchestrator |
2025-11-11 00:53:43.157461 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-11-11 00:53:43.157467 | orchestrator | Tuesday 11 November 2025 00:46:36 +0000 (0:00:00.308) 0:03:08.684 ******
2025-11-11 00:53:43.157473 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.157479 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.157485 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.157491 | orchestrator |
2025-11-11 00:53:43.157497 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-11-11 00:53:43.157503 | orchestrator | Tuesday 11 November 2025 00:46:37 +0000 (0:00:01.193) 0:03:09.877 ******
2025-11-11 00:53:43.157509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:53:43.157515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:53:43.157521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:53:43.157532 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157538 | orchestrator |
2025-11-11 00:53:43.157544 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-11-11 00:53:43.157550 | orchestrator | Tuesday 11 November 2025 00:46:38 +0000 (0:00:00.802) 0:03:10.680 ******
2025-11-11 00:53:43.157556 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157562 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157568 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157574 | orchestrator |
2025-11-11 00:53:43.157580 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-11-11 00:53:43.157586 | orchestrator |
2025-11-11 00:53:43.157596 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-11 00:53:43.157602 | orchestrator | Tuesday 11 November 2025 00:46:39 +0000 (0:00:00.766) 0:03:11.447 ******
2025-11-11 00:53:43.157608 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.157615 | orchestrator |
2025-11-11 00:53:43.157621 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-11 00:53:43.157627 | orchestrator | Tuesday 11 November 2025 00:46:39 +0000 (0:00:00.491) 0:03:11.938 ******
2025-11-11 00:53:43.157633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.157639 | orchestrator |
2025-11-11 00:53:43.157645 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-11 00:53:43.157651 | orchestrator | Tuesday 11 November 2025 00:46:40 +0000 (0:00:00.662) 0:03:12.601 ******
2025-11-11 00:53:43.157657 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157663 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157669 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157675 | orchestrator |
2025-11-11 00:53:43.157681 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-11 00:53:43.157687 | orchestrator | Tuesday 11 November 2025 00:46:41 +0000 (0:00:00.747) 0:03:13.348 ******
2025-11-11 00:53:43.157693 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157700 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157706 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157712 | orchestrator |
2025-11-11 00:53:43.157718 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-11 00:53:43.157724 | orchestrator | Tuesday 11 November 2025 00:46:41 +0000 (0:00:00.320) 0:03:13.669 ******
2025-11-11 00:53:43.157730 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157736 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157742 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157748 | orchestrator |
2025-11-11 00:53:43.157754 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-11 00:53:43.157760 | orchestrator | Tuesday 11 November 2025 00:46:41 +0000 (0:00:00.275) 0:03:13.945 ******
2025-11-11 00:53:43.157766 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157772 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157778 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157784 | orchestrator |
2025-11-11 00:53:43.157790 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-11 00:53:43.157796 | orchestrator | Tuesday 11 November 2025 00:46:42 +0000 (0:00:00.292) 0:03:14.237 ******
2025-11-11 00:53:43.157802 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157808 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157814 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157821 | orchestrator |
2025-11-11 00:53:43.157827 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-11 00:53:43.157833 | orchestrator | Tuesday 11 November 2025 00:46:43 +0000 (0:00:00.977) 0:03:15.215 ******
2025-11-11 00:53:43.157839 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157845 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157855 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157862 | orchestrator |
2025-11-11 00:53:43.157868 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-11 00:53:43.157874 | orchestrator | Tuesday 11 November 2025 00:46:43 +0000 (0:00:00.300) 0:03:15.515 ******
2025-11-11 00:53:43.157884 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.157890 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.157896 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.157902 | orchestrator |
2025-11-11 00:53:43.157908 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-11 00:53:43.157914 | orchestrator | Tuesday 11 November 2025 00:46:43 +0000 (0:00:00.284) 0:03:15.800 ******
2025-11-11 00:53:43.157920 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157927 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157933 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157939 | orchestrator |
2025-11-11 00:53:43.157945 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-11 00:53:43.157951 | orchestrator | Tuesday 11 November 2025 00:46:44 +0000 (0:00:00.711) 0:03:16.512 ******
2025-11-11 00:53:43.157957 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.157963 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.157969 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.157975 | orchestrator |
2025-11-11 00:53:43.157981 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-11 00:53:43.157987 | orchestrator | Tuesday 11 November 2025 00:46:45 +0000 (0:00:00.961) 0:03:17.474 ******
2025-11-11 00:53:43.157994 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158000 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158006 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158012 | orchestrator |
2025-11-11 00:53:43.158039 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-11 00:53:43.158047 | orchestrator | Tuesday 11 November 2025 00:46:45 +0000 (0:00:00.284) 0:03:17.759 ******
2025-11-11 00:53:43.158054 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158060 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158066 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158072 | orchestrator |
2025-11-11 00:53:43.158078 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-11 00:53:43.158084 | orchestrator | Tuesday 11 November 2025 00:46:45 +0000 (0:00:00.314) 0:03:18.073 ******
2025-11-11 00:53:43.158093 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158099 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158106 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158112 | orchestrator |
2025-11-11 00:53:43.158118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-11 00:53:43.158125 | orchestrator | Tuesday 11 November 2025 00:46:46 +0000 (0:00:00.296) 0:03:18.370 ******
2025-11-11 00:53:43.158131 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158137 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158147 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158153 | orchestrator |
2025-11-11 00:53:43.158159 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-11 00:53:43.158165 | orchestrator | Tuesday 11 November 2025 00:46:46 +0000 (0:00:00.294) 0:03:18.664 ******
2025-11-11 00:53:43.158171 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158177 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158183 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158189 | orchestrator |
2025-11-11 00:53:43.158196 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-11 00:53:43.158202 | orchestrator | Tuesday 11 November 2025 00:46:47 +0000 (0:00:00.535) 0:03:19.200 ******
2025-11-11 00:53:43.158208 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158214 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158220 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158231 | orchestrator |
2025-11-11 00:53:43.158237 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-11 00:53:43.158243 | orchestrator | Tuesday 11 November 2025 00:46:47 +0000 (0:00:00.293) 0:03:19.494 ******
2025-11-11 00:53:43.158249 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158255 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.158261 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.158267 | orchestrator |
2025-11-11 00:53:43.158273 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-11 00:53:43.158279 | orchestrator | Tuesday 11 November 2025 00:46:47 +0000 (0:00:00.278) 0:03:19.772 ******
2025-11-11 00:53:43.158286 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158292 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158298 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158304 | orchestrator |
2025-11-11 00:53:43.158310 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-11 00:53:43.158316 | orchestrator | Tuesday 11 November 2025 00:46:47 +0000 (0:00:00.323) 0:03:20.095 ******
2025-11-11 00:53:43.158322 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158328 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158334 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158340 | orchestrator |
2025-11-11 00:53:43.158347 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-11 00:53:43.158353 | orchestrator | Tuesday 11 November 2025 00:46:48 +0000 (0:00:00.543) 0:03:20.639 ******
2025-11-11 00:53:43.158359 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158365 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158371 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158377 | orchestrator |
2025-11-11 00:53:43.158383 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-11-11 00:53:43.158389 | orchestrator | Tuesday 11 November 2025 00:46:49 +0000 (0:00:00.526) 0:03:21.165 ******
2025-11-11 00:53:43.158423 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158430 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158436 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158442 | orchestrator |
2025-11-11 00:53:43.158448 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-11-11 00:53:43.158454 | orchestrator | Tuesday 11 November 2025 00:46:49 +0000 (0:00:00.317) 0:03:21.483 ******
2025-11-11 00:53:43.158460 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.158466 | orchestrator |
2025-11-11 00:53:43.158473 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-11-11 00:53:43.158479 | orchestrator | Tuesday 11 November 2025 00:46:50 +0000 (0:00:00.709) 0:03:22.192 ******
2025-11-11 00:53:43.158485 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.158491 | orchestrator |
2025-11-11 00:53:43.158510 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-11-11 00:53:43.158516 | orchestrator | Tuesday 11 November 2025 00:46:50 +0000 (0:00:00.157) 0:03:22.349 ******
2025-11-11 00:53:43.158522 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-11-11 00:53:43.158528 | orchestrator |
2025-11-11 00:53:43.158534 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-11-11 00:53:43.158540 | orchestrator | Tuesday 11 November 2025 00:46:51 +0000 (0:00:00.957) 0:03:23.307 ******
2025-11-11 00:53:43.158546 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158553 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158559 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158565 | orchestrator |
2025-11-11 00:53:43.158571 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-11-11 00:53:43.158577 | orchestrator | Tuesday 11 November 2025 00:46:51 +0000 (0:00:00.313) 0:03:23.620 ******
2025-11-11 00:53:43.158583 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158589 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158595 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158605 | orchestrator |
2025-11-11 00:53:43.158612 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-11-11 00:53:43.158618 | orchestrator | Tuesday 11 November 2025 00:46:51 +0000 (0:00:00.331) 0:03:23.952 ******
2025-11-11 00:53:43.158624 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.158630 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.158636 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.158642 | orchestrator |
2025-11-11 00:53:43.158648 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-11-11 00:53:43.158655 | orchestrator | Tuesday 11 November 2025 00:46:54 +0000 (0:00:02.302) 0:03:26.254 ******
2025-11-11 00:53:43.158661 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.158667 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.158673 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.158679 | orchestrator |
2025-11-11 00:53:43.158685 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-11-11 00:53:43.158691 | orchestrator | Tuesday 11 November 2025 00:46:54 +0000 (0:00:00.766) 0:03:27.020 ******
2025-11-11 00:53:43.158697 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.158703 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.158709 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.158715 | orchestrator |
2025-11-11 00:53:43.158721 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-11-11 00:53:43.158728 | orchestrator | Tuesday 11 November 2025 00:46:55 +0000 (0:00:00.730) 0:03:27.751 ******
2025-11-11 00:53:43.158737 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158744 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.158750 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.158756 | orchestrator |
2025-11-11 00:53:43.158762 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-11-11 00:53:43.158768 | orchestrator | Tuesday 11 November 2025 00:46:56 +0000 (0:00:00.659) 0:03:28.411 ******
2025-11-11 00:53:43.158774 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.158780 | orchestrator |
2025-11-11 00:53:43.158786 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-11-11 00:53:43.158793 | orchestrator | Tuesday 11 November 2025 00:46:57 +0000 (0:00:01.654) 0:03:30.065 ******
2025-11-11 00:53:43.158799 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.158805 | orchestrator |
2025-11-11 00:53:43.158811 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-11-11 00:53:43.158817 | orchestrator | Tuesday 11 November 2025 00:46:58 +0000 (0:00:00.904) 0:03:30.970 ******
2025-11-11 00:53:43.158823 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-11 00:53:43.158829 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.158835 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.158841 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-11-11 00:53:43.158847 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-11 00:53:43.158853 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-11 00:53:43.158859 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-11 00:53:43.158865 | orchestrator | changed: [testbed-node-1 -> {{ item }}]
2025-11-11 00:53:43.158871 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-11 00:53:43.158877 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-11-11 00:53:43.158884 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-11-11 00:53:43.158890 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-11-11 00:53:43.158896 | orchestrator |
2025-11-11
00:53:43.158902 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-11-11 00:53:43.158908 | orchestrator | Tuesday 11 November 2025 00:47:02 +0000 (0:00:03.356) 0:03:34.327 ****** 2025-11-11 00:53:43.158914 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.158925 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.158931 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.158937 | orchestrator | 2025-11-11 00:53:43.158943 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-11-11 00:53:43.158949 | orchestrator | Tuesday 11 November 2025 00:47:03 +0000 (0:00:01.239) 0:03:35.566 ****** 2025-11-11 00:53:43.158955 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.158961 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.158967 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.158973 | orchestrator | 2025-11-11 00:53:43.158980 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-11-11 00:53:43.158985 | orchestrator | Tuesday 11 November 2025 00:47:03 +0000 (0:00:00.309) 0:03:35.875 ****** 2025-11-11 00:53:43.158990 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.158996 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159001 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159006 | orchestrator | 2025-11-11 00:53:43.159012 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-11-11 00:53:43.159017 | orchestrator | Tuesday 11 November 2025 00:47:04 +0000 (0:00:00.309) 0:03:36.185 ****** 2025-11-11 00:53:43.159022 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159032 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159037 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159042 | orchestrator | 2025-11-11 00:53:43.159048 | orchestrator | TASK 
[ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-11-11 00:53:43.159053 | orchestrator | Tuesday 11 November 2025 00:47:05 +0000 (0:00:01.658) 0:03:37.844 ****** 2025-11-11 00:53:43.159058 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159064 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159069 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159074 | orchestrator | 2025-11-11 00:53:43.159079 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-11-11 00:53:43.159085 | orchestrator | Tuesday 11 November 2025 00:47:07 +0000 (0:00:01.321) 0:03:39.166 ****** 2025-11-11 00:53:43.159090 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159095 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159101 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159106 | orchestrator | 2025-11-11 00:53:43.159111 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-11-11 00:53:43.159117 | orchestrator | Tuesday 11 November 2025 00:47:07 +0000 (0:00:00.297) 0:03:39.463 ****** 2025-11-11 00:53:43.159122 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159127 | orchestrator | 2025-11-11 00:53:43.159133 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-11-11 00:53:43.159138 | orchestrator | Tuesday 11 November 2025 00:47:08 +0000 (0:00:00.665) 0:03:40.129 ****** 2025-11-11 00:53:43.159143 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159148 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159154 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159159 | orchestrator | 2025-11-11 00:53:43.159164 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] 
*********************** 2025-11-11 00:53:43.159170 | orchestrator | Tuesday 11 November 2025 00:47:08 +0000 (0:00:00.328) 0:03:40.457 ****** 2025-11-11 00:53:43.159175 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159180 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159185 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159191 | orchestrator | 2025-11-11 00:53:43.159196 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-11-11 00:53:43.159201 | orchestrator | Tuesday 11 November 2025 00:47:08 +0000 (0:00:00.314) 0:03:40.771 ****** 2025-11-11 00:53:43.159210 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159216 | orchestrator | 2025-11-11 00:53:43.159226 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-11-11 00:53:43.159231 | orchestrator | Tuesday 11 November 2025 00:47:09 +0000 (0:00:00.526) 0:03:41.298 ****** 2025-11-11 00:53:43.159236 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159242 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159247 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159252 | orchestrator | 2025-11-11 00:53:43.159258 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-11-11 00:53:43.159263 | orchestrator | Tuesday 11 November 2025 00:47:10 +0000 (0:00:01.770) 0:03:43.068 ****** 2025-11-11 00:53:43.159268 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159274 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159279 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159284 | orchestrator | 2025-11-11 00:53:43.159290 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-11-11 00:53:43.159295 | orchestrator | Tuesday 
11 November 2025 00:47:12 +0000 (0:00:01.143) 0:03:44.212 ****** 2025-11-11 00:53:43.159300 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159305 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159311 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159316 | orchestrator | 2025-11-11 00:53:43.159321 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-11-11 00:53:43.159327 | orchestrator | Tuesday 11 November 2025 00:47:13 +0000 (0:00:01.701) 0:03:45.914 ****** 2025-11-11 00:53:43.159332 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.159338 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.159343 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.159348 | orchestrator | 2025-11-11 00:53:43.159353 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-11-11 00:53:43.159359 | orchestrator | Tuesday 11 November 2025 00:47:15 +0000 (0:00:01.877) 0:03:47.791 ****** 2025-11-11 00:53:43.159364 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159369 | orchestrator | 2025-11-11 00:53:43.159375 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-11-11 00:53:43.159380 | orchestrator | Tuesday 11 November 2025 00:47:16 +0000 (0:00:00.720) 0:03:48.512 ****** 2025-11-11 00:53:43.159385 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-11-11 00:53:43.159401 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159407 | orchestrator | 2025-11-11 00:53:43.159412 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-11-11 00:53:43.159417 | orchestrator | Tuesday 11 November 2025 00:47:38 +0000 (0:00:21.888) 0:04:10.401 ****** 2025-11-11 00:53:43.159423 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159428 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159433 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159439 | orchestrator | 2025-11-11 00:53:43.159444 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-11-11 00:53:43.159449 | orchestrator | Tuesday 11 November 2025 00:47:47 +0000 (0:00:09.027) 0:04:19.428 ****** 2025-11-11 00:53:43.159455 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159460 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159465 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159471 | orchestrator | 2025-11-11 00:53:43.159476 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-11-11 00:53:43.159485 | orchestrator | Tuesday 11 November 2025 00:47:47 +0000 (0:00:00.282) 0:04:19.711 ****** 2025-11-11 00:53:43.159493 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-11-11 00:53:43.159505 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-11-11 00:53:43.159512 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-11-11 00:53:43.159519 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-11-11 00:53:43.159528 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-11-11 00:53:43.159539 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cf6b5ba5bf57e8a80ff445f25d80686813127301'}])  2025-11-11 00:53:43.159547 | orchestrator | 2025-11-11 00:53:43.159552 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2025-11-11 00:53:43.159557 | orchestrator | Tuesday 11 November 2025 00:48:02 +0000 (0:00:14.674) 0:04:34.386 ****** 2025-11-11 00:53:43.159563 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159568 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159573 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159579 | orchestrator | 2025-11-11 00:53:43.159584 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-11 00:53:43.159589 | orchestrator | Tuesday 11 November 2025 00:48:02 +0000 (0:00:00.329) 0:04:34.715 ****** 2025-11-11 00:53:43.159594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159600 | orchestrator | 2025-11-11 00:53:43.159605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-11 00:53:43.159610 | orchestrator | Tuesday 11 November 2025 00:48:03 +0000 (0:00:00.694) 0:04:35.410 ****** 2025-11-11 00:53:43.159616 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159621 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159626 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159631 | orchestrator | 2025-11-11 00:53:43.159637 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-11 00:53:43.159642 | orchestrator | Tuesday 11 November 2025 00:48:03 +0000 (0:00:00.313) 0:04:35.723 ****** 2025-11-11 00:53:43.159647 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159653 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159658 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159663 | orchestrator | 2025-11-11 00:53:43.159669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-11 
00:53:43.159674 | orchestrator | Tuesday 11 November 2025 00:48:03 +0000 (0:00:00.314) 0:04:36.037 ****** 2025-11-11 00:53:43.159683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-11 00:53:43.159689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-11 00:53:43.159694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-11 00:53:43.159699 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159705 | orchestrator | 2025-11-11 00:53:43.159710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-11 00:53:43.159715 | orchestrator | Tuesday 11 November 2025 00:48:04 +0000 (0:00:00.810) 0:04:36.848 ****** 2025-11-11 00:53:43.159721 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159726 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159735 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159740 | orchestrator | 2025-11-11 00:53:43.159746 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-11-11 00:53:43.159751 | orchestrator | 2025-11-11 00:53:43.159756 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-11 00:53:43.159762 | orchestrator | Tuesday 11 November 2025 00:48:05 +0000 (0:00:00.718) 0:04:37.566 ****** 2025-11-11 00:53:43.159767 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159773 | orchestrator | 2025-11-11 00:53:43.159778 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-11 00:53:43.159783 | orchestrator | Tuesday 11 November 2025 00:48:05 +0000 (0:00:00.485) 0:04:38.052 ****** 2025-11-11 00:53:43.159788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-11-11 00:53:43.159794 | orchestrator | 2025-11-11 00:53:43.159799 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-11 00:53:43.159804 | orchestrator | Tuesday 11 November 2025 00:48:06 +0000 (0:00:00.651) 0:04:38.704 ****** 2025-11-11 00:53:43.159810 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159815 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159820 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159826 | orchestrator | 2025-11-11 00:53:43.159831 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-11 00:53:43.159836 | orchestrator | Tuesday 11 November 2025 00:48:07 +0000 (0:00:00.704) 0:04:39.409 ****** 2025-11-11 00:53:43.159842 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159847 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159852 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159857 | orchestrator | 2025-11-11 00:53:43.159863 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-11 00:53:43.159868 | orchestrator | Tuesday 11 November 2025 00:48:07 +0000 (0:00:00.295) 0:04:39.705 ****** 2025-11-11 00:53:43.159873 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159879 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159884 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159889 | orchestrator | 2025-11-11 00:53:43.159895 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-11 00:53:43.159900 | orchestrator | Tuesday 11 November 2025 00:48:07 +0000 (0:00:00.297) 0:04:40.003 ****** 2025-11-11 00:53:43.159911 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159916 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159922 | orchestrator | skipping: 
[testbed-node-2] 2025-11-11 00:53:43.159927 | orchestrator | 2025-11-11 00:53:43.159932 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-11 00:53:43.159938 | orchestrator | Tuesday 11 November 2025 00:48:08 +0000 (0:00:00.470) 0:04:40.473 ****** 2025-11-11 00:53:43.159943 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.159949 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.159954 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.159959 | orchestrator | 2025-11-11 00:53:43.159965 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-11 00:53:43.159974 | orchestrator | Tuesday 11 November 2025 00:48:09 +0000 (0:00:00.745) 0:04:41.219 ****** 2025-11-11 00:53:43.159979 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.159984 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.159990 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.159995 | orchestrator | 2025-11-11 00:53:43.160000 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-11 00:53:43.160005 | orchestrator | Tuesday 11 November 2025 00:48:09 +0000 (0:00:00.290) 0:04:41.510 ****** 2025-11-11 00:53:43.160011 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160016 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160021 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160026 | orchestrator | 2025-11-11 00:53:43.160032 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-11 00:53:43.160037 | orchestrator | Tuesday 11 November 2025 00:48:09 +0000 (0:00:00.296) 0:04:41.806 ****** 2025-11-11 00:53:43.160042 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160048 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160053 | orchestrator | ok: [testbed-node-2] 2025-11-11 
00:53:43.160058 | orchestrator | 2025-11-11 00:53:43.160064 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-11 00:53:43.160069 | orchestrator | Tuesday 11 November 2025 00:48:10 +0000 (0:00:00.683) 0:04:42.490 ****** 2025-11-11 00:53:43.160074 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160080 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160085 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.160090 | orchestrator | 2025-11-11 00:53:43.160095 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-11 00:53:43.160101 | orchestrator | Tuesday 11 November 2025 00:48:11 +0000 (0:00:01.043) 0:04:43.534 ****** 2025-11-11 00:53:43.160106 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160111 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160117 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160122 | orchestrator | 2025-11-11 00:53:43.160127 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-11 00:53:43.160132 | orchestrator | Tuesday 11 November 2025 00:48:11 +0000 (0:00:00.303) 0:04:43.837 ****** 2025-11-11 00:53:43.160138 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160143 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160148 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.160154 | orchestrator | 2025-11-11 00:53:43.160159 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-11 00:53:43.160164 | orchestrator | Tuesday 11 November 2025 00:48:12 +0000 (0:00:00.368) 0:04:44.206 ****** 2025-11-11 00:53:43.160170 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160175 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160180 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160185 | orchestrator | 
2025-11-11 00:53:43.160191 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-11 00:53:43.160199 | orchestrator | Tuesday 11 November 2025 00:48:12 +0000 (0:00:00.290) 0:04:44.496 ****** 2025-11-11 00:53:43.160205 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160210 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160215 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160220 | orchestrator | 2025-11-11 00:53:43.160226 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-11 00:53:43.160231 | orchestrator | Tuesday 11 November 2025 00:48:12 +0000 (0:00:00.507) 0:04:45.004 ****** 2025-11-11 00:53:43.160236 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160242 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160247 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160252 | orchestrator | 2025-11-11 00:53:43.160257 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-11 00:53:43.160263 | orchestrator | Tuesday 11 November 2025 00:48:13 +0000 (0:00:00.284) 0:04:45.288 ****** 2025-11-11 00:53:43.160272 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160277 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160282 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160288 | orchestrator | 2025-11-11 00:53:43.160293 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-11 00:53:43.160298 | orchestrator | Tuesday 11 November 2025 00:48:13 +0000 (0:00:00.300) 0:04:45.589 ****** 2025-11-11 00:53:43.160304 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160309 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160314 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160319 | orchestrator | 
2025-11-11 00:53:43.160325 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-11 00:53:43.160331 | orchestrator | Tuesday 11 November 2025 00:48:13 +0000 (0:00:00.279) 0:04:45.868 ****** 2025-11-11 00:53:43.160340 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160349 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160357 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.160365 | orchestrator | 2025-11-11 00:53:43.160373 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-11 00:53:43.160381 | orchestrator | Tuesday 11 November 2025 00:48:14 +0000 (0:00:00.514) 0:04:46.383 ****** 2025-11-11 00:53:43.160422 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160433 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160442 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.160450 | orchestrator | 2025-11-11 00:53:43.160459 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-11 00:53:43.160468 | orchestrator | Tuesday 11 November 2025 00:48:14 +0000 (0:00:00.324) 0:04:46.707 ****** 2025-11-11 00:53:43.160477 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.160489 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.160498 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.160508 | orchestrator | 2025-11-11 00:53:43.160513 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-11-11 00:53:43.160519 | orchestrator | Tuesday 11 November 2025 00:48:15 +0000 (0:00:00.505) 0:04:47.213 ****** 2025-11-11 00:53:43.160524 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-11 00:53:43.160530 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:53:43.160535 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2025-11-11 00:53:43.160540 | orchestrator | 2025-11-11 00:53:43.160546 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-11-11 00:53:43.160551 | orchestrator | Tuesday 11 November 2025 00:48:15 +0000 (0:00:00.797) 0:04:48.010 ****** 2025-11-11 00:53:43.160556 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.160561 | orchestrator | 2025-11-11 00:53:43.160567 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-11-11 00:53:43.160572 | orchestrator | Tuesday 11 November 2025 00:48:16 +0000 (0:00:00.813) 0:04:48.824 ****** 2025-11-11 00:53:43.160578 | orchestrator | changed: [testbed-node-0] 2025-11-11 00:53:43.160583 | orchestrator | changed: [testbed-node-1] 2025-11-11 00:53:43.160588 | orchestrator | changed: [testbed-node-2] 2025-11-11 00:53:43.160593 | orchestrator | 2025-11-11 00:53:43.160599 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-11-11 00:53:43.160604 | orchestrator | Tuesday 11 November 2025 00:48:17 +0000 (0:00:00.667) 0:04:49.492 ****** 2025-11-11 00:53:43.160609 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.160614 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.160620 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.160625 | orchestrator | 2025-11-11 00:53:43.160630 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-11-11 00:53:43.160635 | orchestrator | Tuesday 11 November 2025 00:48:17 +0000 (0:00:00.326) 0:04:49.818 ****** 2025-11-11 00:53:43.160646 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-11 00:53:43.160652 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-11 00:53:43.160657 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-11-11 00:53:43.160662 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-11-11 00:53:43.160667 | orchestrator |
2025-11-11 00:53:43.160673 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-11-11 00:53:43.160678 | orchestrator | Tuesday 11 November 2025 00:48:27 +0000 (0:00:10.290) 0:05:00.109 ******
2025-11-11 00:53:43.160683 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.160688 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.160694 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.160699 | orchestrator |
2025-11-11 00:53:43.160704 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-11-11 00:53:43.160710 | orchestrator | Tuesday 11 November 2025 00:48:28 +0000 (0:00:00.542) 0:05:00.652 ******
2025-11-11 00:53:43.160715 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-11-11 00:53:43.160720 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-11-11 00:53:43.160726 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-11-11 00:53:43.160731 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-11-11 00:53:43.160736 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.160746 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.160752 | orchestrator |
2025-11-11 00:53:43.160757 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-11-11 00:53:43.160763 | orchestrator | Tuesday 11 November 2025 00:48:30 +0000 (0:00:02.095) 0:05:02.747 ******
2025-11-11 00:53:43.160768 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-11-11 00:53:43.160773 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-11-11 00:53:43.160779 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-11-11 00:53:43.160784 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-11 00:53:43.160789 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-11-11 00:53:43.160795 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-11-11 00:53:43.160800 | orchestrator |
2025-11-11 00:53:43.160805 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-11-11 00:53:43.160811 | orchestrator | Tuesday 11 November 2025 00:48:31 +0000 (0:00:01.213) 0:05:03.960 ******
2025-11-11 00:53:43.160816 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.160821 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.160826 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.160832 | orchestrator |
2025-11-11 00:53:43.160837 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-11-11 00:53:43.160842 | orchestrator | Tuesday 11 November 2025 00:48:32 +0000 (0:00:00.651) 0:05:04.612 ******
2025-11-11 00:53:43.160848 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.160853 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.160858 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.160864 | orchestrator |
2025-11-11 00:53:43.160869 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-11-11 00:53:43.160874 | orchestrator | Tuesday 11 November 2025 00:48:32 +0000 (0:00:00.296) 0:05:04.909 ******
2025-11-11 00:53:43.160880 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.160885 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.160890 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.160895 | orchestrator |
2025-11-11 00:53:43.160901 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-11-11 00:53:43.160906 | orchestrator | Tuesday 11 November 2025 00:48:33 +0000 (0:00:00.577) 0:05:05.487 ******
2025-11-11 00:53:43.160912 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.160921 | orchestrator |
2025-11-11 00:53:43.160929 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-11-11 00:53:43.160934 | orchestrator | Tuesday 11 November 2025 00:48:33 +0000 (0:00:00.485) 0:05:05.972 ******
2025-11-11 00:53:43.160939 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.160944 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.160948 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.160953 | orchestrator |
2025-11-11 00:53:43.160958 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-11-11 00:53:43.160963 | orchestrator | Tuesday 11 November 2025 00:48:34 +0000 (0:00:00.305) 0:05:06.278 ******
2025-11-11 00:53:43.160967 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.160972 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.160977 | orchestrator | skipping: [testbed-node-2]
2025-11-11 00:53:43.160982 | orchestrator |
2025-11-11 00:53:43.160986 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-11-11 00:53:43.160991 | orchestrator | Tuesday 11 November 2025 00:48:34 +0000 (0:00:00.534) 0:05:06.813 ******
2025-11-11 00:53:43.160996 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.161000 | orchestrator |
2025-11-11 00:53:43.161005 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-11-11 00:53:43.161010 | orchestrator | Tuesday 11 November 2025 00:48:35 +0000 (0:00:00.501) 0:05:07.314 ******
2025-11-11 00:53:43.161015 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161019 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.161024 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161029 | orchestrator |
2025-11-11 00:53:43.161034 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-11-11 00:53:43.161038 | orchestrator | Tuesday 11 November 2025 00:48:36 +0000 (0:00:01.261) 0:05:08.576 ******
2025-11-11 00:53:43.161043 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161048 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.161052 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161057 | orchestrator |
2025-11-11 00:53:43.161062 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-11-11 00:53:43.161067 | orchestrator | Tuesday 11 November 2025 00:48:37 +0000 (0:00:01.376) 0:05:09.953 ******
2025-11-11 00:53:43.161071 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161076 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.161081 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161085 | orchestrator |
2025-11-11 00:53:43.161090 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-11-11 00:53:43.161095 | orchestrator | Tuesday 11 November 2025 00:48:39 +0000 (0:00:01.708) 0:05:11.661 ******
2025-11-11 00:53:43.161100 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161104 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.161109 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161114 | orchestrator |
2025-11-11 00:53:43.161118 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-11-11 00:53:43.161123 | orchestrator | Tuesday 11 November 2025 00:48:41 +0000 (0:00:01.893) 0:05:13.555 ******
2025-11-11 00:53:43.161128 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.161132 | orchestrator | skipping: [testbed-node-1]
2025-11-11 00:53:43.161137 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-11-11 00:53:43.161142 | orchestrator |
2025-11-11 00:53:43.161147 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-11-11 00:53:43.161152 | orchestrator | Tuesday 11 November 2025 00:48:41 +0000 (0:00:00.397) 0:05:13.952 ******
2025-11-11 00:53:43.161159 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-11-11 00:53:43.161164 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-11-11 00:53:43.161172 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-11-11 00:53:43.161177 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-11-11 00:53:43.161182 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.161186 | orchestrator |
2025-11-11 00:53:43.161191 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-11-11 00:53:43.161196 | orchestrator | Tuesday 11 November 2025 00:49:06 +0000 (0:00:24.483) 0:05:38.436 ******
2025-11-11 00:53:43.161201 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.161206 | orchestrator |
2025-11-11 00:53:43.161210 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-11-11 00:53:43.161215 | orchestrator | Tuesday 11 November 2025 00:49:07 +0000 (0:00:01.612) 0:05:40.049 ******
2025-11-11 00:53:43.161220 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.161225 | orchestrator |
2025-11-11 00:53:43.161229 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-11-11 00:53:43.161234 | orchestrator | Tuesday 11 November 2025 00:49:08 +0000 (0:00:00.317) 0:05:40.366 ******
2025-11-11 00:53:43.161239 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.161244 | orchestrator |
2025-11-11 00:53:43.161249 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-11-11 00:53:43.161253 | orchestrator | Tuesday 11 November 2025 00:49:08 +0000 (0:00:00.159) 0:05:40.525 ******
2025-11-11 00:53:43.161258 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-11-11 00:53:43.161263 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-11-11 00:53:43.161268 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-11-11 00:53:43.161272 | orchestrator |
2025-11-11 00:53:43.161277 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-11-11 00:53:43.161282 | orchestrator | Tuesday 11 November 2025 00:49:14 +0000 (0:00:06.415) 0:05:46.941 ******
2025-11-11 00:53:43.161287 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-11-11 00:53:43.161291 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-11-11 00:53:43.161296 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-11-11 00:53:43.161301 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-11-11 00:53:43.161306 | orchestrator |
2025-11-11 00:53:43.161310 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-11-11 00:53:43.161315 | orchestrator | Tuesday 11 November 2025 00:49:19 +0000 (0:00:04.544) 0:05:51.485 ******
2025-11-11 00:53:43.161320 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161325 | orchestrator | changed: [testbed-node-1]
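The "Disable ceph mgr enabled modules" and "Add modules to ceph-mgr" tasks above reconcile the currently enabled mgr modules against the desired list: unwanted modules (iostat, nfs, restful) get disabled, missing ones (dashboard, prometheus) get enabled, and always-on or already-enabled ones (balancer, status) are skipped. A minimal sketch of that reconciliation, assuming the module lists shown in the log (`reconcile_mgr_modules` is a hypothetical helper, not ceph-ansible code):

```python
# Hypothetical sketch of the mgr module reconciliation visible in the log.
def reconcile_mgr_modules(enabled, wanted, always_on=("balancer", "status")):
    # Disable anything enabled that is neither wanted nor always-on.
    to_disable = [m for m in enabled if m not in wanted and m not in always_on]
    # Enable wanted modules that are not yet enabled and not always-on.
    to_enable = [m for m in wanted if m not in enabled and m not in always_on]
    return to_disable, to_enable

enabled = ["iostat", "nfs", "restful", "balancer", "status"]
wanted = ["balancer", "dashboard", "prometheus", "status"]
to_disable, to_enable = reconcile_mgr_modules(enabled, wanted)
# to_disable == ["iostat", "nfs", "restful"]
# to_enable  == ["dashboard", "prometheus"]
```

Each entry in `to_disable`/`to_enable` then maps onto one `ceph mgr module disable <name>` / `ceph mgr module enable <name>` loop item, matching the changed/skipped items above.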
2025-11-11 00:53:43.161344 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161349 | orchestrator |
2025-11-11 00:53:43.161354 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-11-11 00:53:43.161359 | orchestrator | Tuesday 11 November 2025 00:49:20 +0000 (0:00:00.839) 0:05:52.325 ******
2025-11-11 00:53:43.161363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.161368 | orchestrator |
2025-11-11 00:53:43.161373 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-11-11 00:53:43.161378 | orchestrator | Tuesday 11 November 2025 00:49:20 +0000 (0:00:00.492) 0:05:52.818 ******
2025-11-11 00:53:43.161382 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.161387 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.161404 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.161409 | orchestrator |
2025-11-11 00:53:43.161414 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-11-11 00:53:43.161419 | orchestrator | Tuesday 11 November 2025 00:49:21 +0000 (0:00:00.300) 0:05:53.118 ******
2025-11-11 00:53:43.161429 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.161433 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.161438 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.161443 | orchestrator |
2025-11-11 00:53:43.161448 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-11-11 00:53:43.161453 | orchestrator | Tuesday 11 November 2025 00:49:22 +0000 (0:00:01.080) 0:05:54.198 ******
2025-11-11 00:53:43.161458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-11 00:53:43.161464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-11 00:53:43.161472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-11 00:53:43.161480 | orchestrator | skipping: [testbed-node-0]
2025-11-11 00:53:43.161488 | orchestrator |
2025-11-11 00:53:43.161496 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-11-11 00:53:43.161503 | orchestrator | Tuesday 11 November 2025 00:49:23 +0000 (0:00:01.046) 0:05:55.245 ******
2025-11-11 00:53:43.161511 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.161518 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.161526 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.161534 | orchestrator |
2025-11-11 00:53:43.161542 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-11-11 00:53:43.161550 | orchestrator |
2025-11-11 00:53:43.161555 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-11 00:53:43.161560 | orchestrator | Tuesday 11 November 2025 00:49:23 +0000 (0:00:00.540) 0:05:55.785 ******
2025-11-11 00:53:43.161565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.161570 | orchestrator |
2025-11-11 00:53:43.161578 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-11 00:53:43.161583 | orchestrator | Tuesday 11 November 2025 00:49:24 +0000 (0:00:00.659) 0:05:56.445 ******
2025-11-11 00:53:43.161588 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.161593 | orchestrator |
2025-11-11 00:53:43.161598 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-11 00:53:43.161602 | orchestrator | Tuesday 11 November 2025 00:49:24 +0000 (0:00:00.513) 0:05:56.958 ******
2025-11-11 00:53:43.161607 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161612 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161616 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161621 | orchestrator |
2025-11-11 00:53:43.161626 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-11 00:53:43.161631 | orchestrator | Tuesday 11 November 2025 00:49:25 +0000 (0:00:00.290) 0:05:57.249 ******
2025-11-11 00:53:43.161635 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161640 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161645 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161650 | orchestrator |
2025-11-11 00:53:43.161654 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-11 00:53:43.161659 | orchestrator | Tuesday 11 November 2025 00:49:25 +0000 (0:00:00.856) 0:05:58.105 ******
2025-11-11 00:53:43.161664 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161669 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161673 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161678 | orchestrator |
2025-11-11 00:53:43.161683 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-11 00:53:43.161687 | orchestrator | Tuesday 11 November 2025 00:49:26 +0000 (0:00:00.714) 0:05:58.820 ******
2025-11-11 00:53:43.161692 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161697 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161701 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161706 | orchestrator |
2025-11-11 00:53:43.161711 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-11 00:53:43.161721 | orchestrator | Tuesday 11 November 2025 00:49:27 +0000 (0:00:00.657) 0:05:59.477 ******
2025-11-11 00:53:43.161726 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161730 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161735 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161740 | orchestrator |
2025-11-11 00:53:43.161745 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-11 00:53:43.161753 | orchestrator | Tuesday 11 November 2025 00:49:27 +0000 (0:00:00.292) 0:05:59.770 ******
2025-11-11 00:53:43.161758 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161763 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161767 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161772 | orchestrator |
2025-11-11 00:53:43.161777 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-11 00:53:43.161781 | orchestrator | Tuesday 11 November 2025 00:49:28 +0000 (0:00:00.317) 0:06:00.266 ******
2025-11-11 00:53:43.161786 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161791 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161796 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161800 | orchestrator |
2025-11-11 00:53:43.161805 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-11 00:53:43.161810 | orchestrator | Tuesday 11 November 2025 00:49:28 +0000 (0:00:00.317) 0:06:00.583 ******
2025-11-11 00:53:43.161815 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161820 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161824 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161829 | orchestrator |
2025-11-11 00:53:43.161834 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-11 00:53:43.161839 | orchestrator | Tuesday 11 November 2025 00:49:29 +0000 (0:00:00.687) 0:06:01.271 ******
2025-11-11 00:53:43.161843 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161848 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161853 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161857 | orchestrator |
2025-11-11 00:53:43.161862 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-11 00:53:43.161867 | orchestrator | Tuesday 11 November 2025 00:49:29 +0000 (0:00:00.698) 0:06:01.969 ******
2025-11-11 00:53:43.161872 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161876 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161881 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161886 | orchestrator |
2025-11-11 00:53:43.161891 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-11 00:53:43.161895 | orchestrator | Tuesday 11 November 2025 00:49:30 +0000 (0:00:00.497) 0:06:02.467 ******
2025-11-11 00:53:43.161900 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.161905 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.161910 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.161914 | orchestrator |
2025-11-11 00:53:43.161919 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-11 00:53:43.161924 | orchestrator | Tuesday 11 November 2025 00:49:30 +0000 (0:00:00.316) 0:06:02.783 ******
2025-11-11 00:53:43.161929 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161933 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161938 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161943 | orchestrator |
2025-11-11 00:53:43.161947 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-11 00:53:43.161952 | orchestrator | Tuesday 11 November 2025 00:49:30 +0000 (0:00:00.310) 0:06:03.094 ******
2025-11-11 00:53:43.161957 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161962 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161966 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.161971 | orchestrator |
2025-11-11 00:53:43.161976 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-11 00:53:43.161981 | orchestrator | Tuesday 11 November 2025 00:49:31 +0000 (0:00:00.325) 0:06:03.419 ******
2025-11-11 00:53:43.161990 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.161995 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.161999 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162004 | orchestrator |
2025-11-11 00:53:43.162009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-11 00:53:43.162054 | orchestrator | Tuesday 11 November 2025 00:49:31 +0000 (0:00:00.518) 0:06:03.938 ******
2025-11-11 00:53:43.162061 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162066 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162071 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162076 | orchestrator |
2025-11-11 00:53:43.162080 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-11 00:53:43.162086 | orchestrator | Tuesday 11 November 2025 00:49:32 +0000 (0:00:00.297) 0:06:04.235 ******
2025-11-11 00:53:43.162091 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162096 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162100 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162105 | orchestrator |
2025-11-11 00:53:43.162110 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-11 00:53:43.162115 | orchestrator | Tuesday 11 November 2025 00:49:32 +0000 (0:00:00.278) 0:06:04.513 ******
2025-11-11 00:53:43.162120 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162124 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162129 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162134 | orchestrator |
2025-11-11 00:53:43.162139 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-11 00:53:43.162143 | orchestrator | Tuesday 11 November 2025 00:49:32 +0000 (0:00:00.307) 0:06:04.820 ******
2025-11-11 00:53:43.162148 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162153 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162158 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162162 | orchestrator |
2025-11-11 00:53:43.162167 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-11 00:53:43.162172 | orchestrator | Tuesday 11 November 2025 00:49:33 +0000 (0:00:00.534) 0:06:05.355 ******
2025-11-11 00:53:43.162177 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162182 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162186 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162191 | orchestrator |
2025-11-11 00:53:43.162196 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-11-11 00:53:43.162201 | orchestrator | Tuesday 11 November 2025 00:49:33 +0000 (0:00:00.539) 0:06:05.895 ******
2025-11-11 00:53:43.162205 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162210 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162215 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162220 | orchestrator |
2025-11-11 00:53:43.162224 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-11-11 00:53:43.162229 | orchestrator | Tuesday 11 November 2025 00:49:34 +0000 (0:00:00.306) 0:06:06.202 ******
2025-11-11 00:53:43.162238 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-11 00:53:43.162243 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-11 00:53:43.162247 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-11 00:53:43.162252 | orchestrator |
2025-11-11 00:53:43.162257 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-11-11 00:53:43.162262 | orchestrator | Tuesday 11 November 2025 00:49:34 +0000 (0:00:00.802) 0:06:07.004 ******
2025-11-11 00:53:43.162267 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.162271 | orchestrator |
2025-11-11 00:53:43.162276 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-11-11 00:53:43.162281 | orchestrator | Tuesday 11 November 2025 00:49:35 +0000 (0:00:00.709) 0:06:07.714 ******
2025-11-11 00:53:43.162291 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162295 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162300 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162305 | orchestrator |
2025-11-11 00:53:43.162310 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-11-11 00:53:43.162314 | orchestrator | Tuesday 11 November 2025 00:49:35 +0000 (0:00:00.288) 0:06:08.025 ******
2025-11-11 00:53:43.162319 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162324 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162329 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162333 | orchestrator |
2025-11-11 00:53:43.162338 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-11-11 00:53:43.162343 | orchestrator | Tuesday 11 November 2025 00:49:36 +0000 (0:00:00.288) 0:06:08.313 ******
2025-11-11 00:53:43.162348 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162352 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162357 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162362 | orchestrator |
2025-11-11 00:53:43.162367 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-11-11 00:53:43.162371 | orchestrator | Tuesday 11 November 2025 00:49:37 +0000 (0:00:00.869) 0:06:09.183 ******
2025-11-11 00:53:43.162376 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162381 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162385 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162426 | orchestrator |
2025-11-11 00:53:43.162433 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-11-11 00:53:43.162438 | orchestrator | Tuesday 11 November 2025 00:49:37 +0000 (0:00:00.317) 0:06:09.501 ******
2025-11-11 00:53:43.162442 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-11 00:53:43.162447 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-11 00:53:43.162452 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-11 00:53:43.162457 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-11 00:53:43.162461 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-11 00:53:43.162466 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-11 00:53:43.162475 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-11 00:53:43.162480 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-11 00:53:43.162484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-11 00:53:43.162489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-11 00:53:43.162494 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-11 00:53:43.162499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-11 00:53:43.162503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-11 00:53:43.162508 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-11 00:53:43.162513 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-11 00:53:43.162517 | orchestrator |
2025-11-11 00:53:43.162522 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-11-11 00:53:43.162527 | orchestrator | Tuesday 11 November 2025 00:49:39 +0000 (0:00:01.970) 0:06:11.472 ******
2025-11-11 00:53:43.162531 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162536 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162541 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162552 | orchestrator |
2025-11-11 00:53:43.162557 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-11-11 00:53:43.162562 | orchestrator | Tuesday 11 November 2025 00:49:39 +0000 (0:00:00.295) 0:06:11.768 ******
2025-11-11 00:53:43.162566 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.162571 | orchestrator |
2025-11-11 00:53:43.162576 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-11-11 00:53:43.162581 | orchestrator | Tuesday 11 November 2025 00:49:40 +0000 (0:00:00.735) 0:06:12.504 ******
2025-11-11 00:53:43.162586 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-11 00:53:43.162590 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-11 00:53:43.162599 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-11 00:53:43.162604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-11-11 00:53:43.162608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-11-11 00:53:43.162613 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-11-11 00:53:43.162618 | orchestrator |
2025-11-11 00:53:43.162623 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-11-11 00:53:43.162627 | orchestrator | Tuesday 11 November 2025 00:49:41 +0000 (0:00:00.947) 0:06:13.451 ******
2025-11-11 00:53:43.162632 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.162637 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.162641 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-11-11 00:53:43.162646 | orchestrator |
2025-11-11 00:53:43.162651 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-11-11 00:53:43.162656 | orchestrator | Tuesday 11 November 2025 00:49:43 +0000 (0:00:01.870) 0:06:15.321 ******
2025-11-11 00:53:43.162660 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.162665 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.162670 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.162674 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-11 00:53:43.162679 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-11-11 00:53:43.162684 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.162689 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-11 00:53:43.162693 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-11-11 00:53:43.162698 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.162703 | orchestrator |
2025-11-11 00:53:43.162708 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-11-11 00:53:43.162712 | orchestrator | Tuesday 11 November 2025 00:49:44 +0000 (0:00:01.068) 0:06:16.390 ******
2025-11-11 00:53:43.162717 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.162722 | orchestrator |
2025-11-11 00:53:43.162727 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-11-11 00:53:43.162731 | orchestrator | Tuesday 11 November 2025 00:49:46 +0000 (0:00:02.200) 0:06:18.590 ******
2025-11-11 00:53:43.162736 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.162741 | orchestrator |
2025-11-11 00:53:43.162746 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-11-11 00:53:43.162750 | orchestrator | Tuesday 11 November 2025 00:49:47 +0000 (0:00:00.765) 0:06:19.356 ******
2025-11-11 00:53:43.162755 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af11c135-cf10-5d68-b776-281fb5d39e8e', 'data_vg': 'ceph-af11c135-cf10-5d68-b776-281fb5d39e8e'})
2025-11-11 00:53:43.162761 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8', 'data_vg': 'ceph-1efdad6c-d6bf-5a45-aa4b-bff5b179c7b8'})
2025-11-11 00:53:43.162774 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01811ce3-d07c-5516-bfbb-fba58f4d4962', 'data_vg': 'ceph-01811ce3-d07c-5516-bfbb-fba58f4d4962'})
2025-11-11 00:53:43.162787 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1515626-32f0-5abe-9383-a4f06f352cf6', 'data_vg': 'ceph-a1515626-32f0-5abe-9383-a4f06f352cf6'})
2025-11-11 00:53:43.162796 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1fda84b1-4127-5701-96e6-fb2774ba2cbf', 'data_vg': 'ceph-1fda84b1-4127-5701-96e6-fb2774ba2cbf'})
2025-11-11 00:53:43.162804 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2', 'data_vg': 'ceph-d28d894f-b2f1-5cbd-bb27-7fcd31d1cec2'})
2025-11-11 00:53:43.162811 | orchestrator |
2025-11-11 00:53:43.162819 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-11-11 00:53:43.162827 | orchestrator | Tuesday 11 November 2025 00:50:30 +0000 (0:00:42.823) 0:07:02.179 ******
2025-11-11 00:53:43.162835 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.162842 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.162850 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.162857 | orchestrator |
2025-11-11 00:53:43.162865 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-11-11 00:53:43.162872 | orchestrator | Tuesday 11 November 2025 00:50:30 +0000 (0:00:00.298) 0:07:02.477 ******
2025-11-11 00:53:43.162877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.162881 | orchestrator |
2025-11-11 00:53:43.162886 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-11-11 00:53:43.162891 | orchestrator | Tuesday 11 November 2025 00:50:31 +0000 (0:00:00.786) 0:07:03.264 ******
2025-11-11 00:53:43.162895 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.162900 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.162905 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.162910 | orchestrator |
2025-11-11 00:53:43.162914
| orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-11-11 00:53:43.162919 | orchestrator | Tuesday 11 November 2025 00:50:31 +0000 (0:00:00.669) 0:07:03.934 ****** 2025-11-11 00:53:43.162924 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.162928 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.162933 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.162938 | orchestrator | 2025-11-11 00:53:43.162943 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-11-11 00:53:43.162947 | orchestrator | Tuesday 11 November 2025 00:50:34 +0000 (0:00:02.455) 0:07:06.389 ****** 2025-11-11 00:53:43.162956 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.162961 | orchestrator | 2025-11-11 00:53:43.162966 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-11-11 00:53:43.162970 | orchestrator | Tuesday 11 November 2025 00:50:35 +0000 (0:00:00.797) 0:07:07.187 ****** 2025-11-11 00:53:43.162974 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.162979 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.162984 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.162988 | orchestrator | 2025-11-11 00:53:43.162993 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-11-11 00:53:43.162997 | orchestrator | Tuesday 11 November 2025 00:50:36 +0000 (0:00:01.195) 0:07:08.382 ****** 2025-11-11 00:53:43.163001 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.163006 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.163010 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.163015 | orchestrator | 2025-11-11 00:53:43.163019 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2025-11-11 00:53:43.163024 | orchestrator | Tuesday 11 November 2025 00:50:37 +0000 (0:00:01.135) 0:07:09.518 ****** 2025-11-11 00:53:43.163028 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.163037 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.163042 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.163046 | orchestrator | 2025-11-11 00:53:43.163050 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-11-11 00:53:43.163055 | orchestrator | Tuesday 11 November 2025 00:50:39 +0000 (0:00:01.928) 0:07:11.446 ****** 2025-11-11 00:53:43.163059 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163064 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163068 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163073 | orchestrator | 2025-11-11 00:53:43.163077 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-11-11 00:53:43.163082 | orchestrator | Tuesday 11 November 2025 00:50:39 +0000 (0:00:00.301) 0:07:11.748 ****** 2025-11-11 00:53:43.163086 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163091 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163095 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163100 | orchestrator | 2025-11-11 00:53:43.163104 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-11-11 00:53:43.163109 | orchestrator | Tuesday 11 November 2025 00:50:39 +0000 (0:00:00.302) 0:07:12.050 ****** 2025-11-11 00:53:43.163113 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-11 00:53:43.163118 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-11-11 00:53:43.163122 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-11-11 00:53:43.163127 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-11-11 00:53:43.163131 | orchestrator | ok: 
[testbed-node-4] => (item=1) 2025-11-11 00:53:43.163136 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-11-11 00:53:43.163140 | orchestrator | 2025-11-11 00:53:43.163145 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-11-11 00:53:43.163149 | orchestrator | Tuesday 11 November 2025 00:50:40 +0000 (0:00:00.957) 0:07:13.008 ****** 2025-11-11 00:53:43.163154 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-11 00:53:43.163158 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-11-11 00:53:43.163163 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-11 00:53:43.163167 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-11-11 00:53:43.163172 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-11 00:53:43.163176 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-11 00:53:43.163181 | orchestrator | 2025-11-11 00:53:43.163189 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-11-11 00:53:43.163194 | orchestrator | Tuesday 11 November 2025 00:50:43 +0000 (0:00:02.469) 0:07:15.477 ****** 2025-11-11 00:53:43.163198 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-11 00:53:43.163203 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-11-11 00:53:43.163207 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-11 00:53:43.163212 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-11-11 00:53:43.163216 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-11 00:53:43.163221 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-11 00:53:43.163225 | orchestrator | 2025-11-11 00:53:43.163230 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-11-11 00:53:43.163234 | orchestrator | Tuesday 11 November 2025 00:50:47 +0000 (0:00:03.675) 0:07:19.153 ****** 2025-11-11 00:53:43.163239 | orchestrator | 
skipping: [testbed-node-3] 2025-11-11 00:53:43.163243 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163248 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-11 00:53:43.163252 | orchestrator | 2025-11-11 00:53:43.163256 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-11-11 00:53:43.163261 | orchestrator | Tuesday 11 November 2025 00:50:49 +0000 (0:00:02.226) 0:07:21.380 ****** 2025-11-11 00:53:43.163265 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163270 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163274 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-11-11 00:53:43.163283 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-11 00:53:43.163287 | orchestrator | 2025-11-11 00:53:43.163292 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-11-11 00:53:43.163296 | orchestrator | Tuesday 11 November 2025 00:51:01 +0000 (0:00:12.643) 0:07:34.023 ****** 2025-11-11 00:53:43.163301 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163305 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163310 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163314 | orchestrator | 2025-11-11 00:53:43.163319 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-11 00:53:43.163323 | orchestrator | Tuesday 11 November 2025 00:51:03 +0000 (0:00:01.111) 0:07:35.135 ****** 2025-11-11 00:53:43.163328 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163332 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163336 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163341 | orchestrator | 2025-11-11 00:53:43.163348 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2025-11-11 00:53:43.163353 | orchestrator | Tuesday 11 November 2025 00:51:03 +0000 (0:00:00.298) 0:07:35.433 ****** 2025-11-11 00:53:43.163357 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.163362 | orchestrator | 2025-11-11 00:53:43.163366 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-11 00:53:43.163371 | orchestrator | Tuesday 11 November 2025 00:51:04 +0000 (0:00:00.719) 0:07:36.152 ****** 2025-11-11 00:53:43.163375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:53:43.163380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:53:43.163384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:53:43.163389 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163406 | orchestrator | 2025-11-11 00:53:43.163411 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-11 00:53:43.163415 | orchestrator | Tuesday 11 November 2025 00:51:04 +0000 (0:00:00.354) 0:07:36.507 ****** 2025-11-11 00:53:43.163420 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163425 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163429 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163433 | orchestrator | 2025-11-11 00:53:43.163438 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-11 00:53:43.163443 | orchestrator | Tuesday 11 November 2025 00:51:04 +0000 (0:00:00.272) 0:07:36.780 ****** 2025-11-11 00:53:43.163447 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163452 | orchestrator | 2025-11-11 00:53:43.163456 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2025-11-11 00:53:43.163461 | orchestrator | Tuesday 11 November 2025 00:51:04 +0000 (0:00:00.204) 0:07:36.985 ****** 2025-11-11 00:53:43.163465 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163470 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163474 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163479 | orchestrator | 2025-11-11 00:53:43.163483 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-11 00:53:43.163488 | orchestrator | Tuesday 11 November 2025 00:51:05 +0000 (0:00:00.522) 0:07:37.508 ****** 2025-11-11 00:53:43.163492 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163496 | orchestrator | 2025-11-11 00:53:43.163501 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-11 00:53:43.163505 | orchestrator | Tuesday 11 November 2025 00:51:05 +0000 (0:00:00.222) 0:07:37.730 ****** 2025-11-11 00:53:43.163510 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163514 | orchestrator | 2025-11-11 00:53:43.163519 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-11 00:53:43.163523 | orchestrator | Tuesday 11 November 2025 00:51:05 +0000 (0:00:00.214) 0:07:37.944 ****** 2025-11-11 00:53:43.163532 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163536 | orchestrator | 2025-11-11 00:53:43.163541 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-11 00:53:43.163545 | orchestrator | Tuesday 11 November 2025 00:51:05 +0000 (0:00:00.133) 0:07:38.078 ****** 2025-11-11 00:53:43.163550 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163554 | orchestrator | 2025-11-11 00:53:43.163559 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-11 00:53:43.163563 | orchestrator | Tuesday 11 November 2025 
00:51:06 +0000 (0:00:00.211) 0:07:38.289 ****** 2025-11-11 00:53:43.163571 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163576 | orchestrator | 2025-11-11 00:53:43.163580 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-11 00:53:43.163585 | orchestrator | Tuesday 11 November 2025 00:51:06 +0000 (0:00:00.209) 0:07:38.499 ****** 2025-11-11 00:53:43.163590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:53:43.163594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:53:43.163599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:53:43.163603 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163608 | orchestrator | 2025-11-11 00:53:43.163612 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-11 00:53:43.163617 | orchestrator | Tuesday 11 November 2025 00:51:06 +0000 (0:00:00.360) 0:07:38.860 ****** 2025-11-11 00:53:43.163621 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163626 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163630 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163635 | orchestrator | 2025-11-11 00:53:43.163639 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-11 00:53:43.163644 | orchestrator | Tuesday 11 November 2025 00:51:07 +0000 (0:00:00.312) 0:07:39.173 ****** 2025-11-11 00:53:43.163648 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163653 | orchestrator | 2025-11-11 00:53:43.163657 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-11 00:53:43.163662 | orchestrator | Tuesday 11 November 2025 00:51:07 +0000 (0:00:00.211) 0:07:39.384 ****** 2025-11-11 00:53:43.163666 | orchestrator | skipping: [testbed-node-3] 2025-11-11 
00:53:43.163671 | orchestrator | 2025-11-11 00:53:43.163675 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-11-11 00:53:43.163680 | orchestrator | 2025-11-11 00:53:43.163684 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-11 00:53:43.163689 | orchestrator | Tuesday 11 November 2025 00:51:08 +0000 (0:00:01.131) 0:07:40.516 ****** 2025-11-11 00:53:43.163694 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.163699 | orchestrator | 2025-11-11 00:53:43.163704 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-11 00:53:43.163708 | orchestrator | Tuesday 11 November 2025 00:51:09 +0000 (0:00:01.106) 0:07:41.623 ****** 2025-11-11 00:53:43.163715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-11 00:53:43.163720 | orchestrator | 2025-11-11 00:53:43.163725 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-11 00:53:43.163729 | orchestrator | Tuesday 11 November 2025 00:51:10 +0000 (0:00:01.158) 0:07:42.782 ****** 2025-11-11 00:53:43.163734 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163738 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163743 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163747 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.163752 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.163757 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.163765 | orchestrator | 2025-11-11 00:53:43.163769 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-11-11 00:53:43.163774 | orchestrator | Tuesday 11 November 2025 00:51:11 +0000 (0:00:01.054) 0:07:43.837 ****** 2025-11-11 00:53:43.163778 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.163783 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.163787 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.163792 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.163796 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.163801 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.163805 | orchestrator | 2025-11-11 00:53:43.163810 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-11 00:53:43.163814 | orchestrator | Tuesday 11 November 2025 00:51:12 +0000 (0:00:00.970) 0:07:44.807 ****** 2025-11-11 00:53:43.163819 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.163823 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.163828 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.163832 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.163837 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.163841 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.163846 | orchestrator | 2025-11-11 00:53:43.163850 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-11 00:53:43.163855 | orchestrator | Tuesday 11 November 2025 00:51:13 +0000 (0:00:00.686) 0:07:45.494 ****** 2025-11-11 00:53:43.163859 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.163864 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.163868 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.163873 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.163877 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.163882 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.163886 | orchestrator | 2025-11-11 
00:53:43.163891 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-11 00:53:43.163895 | orchestrator | Tuesday 11 November 2025 00:51:14 +0000 (0:00:00.945) 0:07:46.439 ****** 2025-11-11 00:53:43.163900 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163904 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163909 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163913 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.163917 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.163922 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.163927 | orchestrator | 2025-11-11 00:53:43.163931 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-11 00:53:43.163936 | orchestrator | Tuesday 11 November 2025 00:51:15 +0000 (0:00:00.972) 0:07:47.412 ****** 2025-11-11 00:53:43.163940 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163944 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163949 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163953 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.163958 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.163966 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.163971 | orchestrator | 2025-11-11 00:53:43.163975 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-11 00:53:43.163980 | orchestrator | Tuesday 11 November 2025 00:51:16 +0000 (0:00:00.797) 0:07:48.209 ****** 2025-11-11 00:53:43.163984 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.163989 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.163993 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.163998 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164002 | orchestrator | skipping: [testbed-node-1] 
2025-11-11 00:53:43.164007 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164011 | orchestrator | 2025-11-11 00:53:43.164016 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-11 00:53:43.164020 | orchestrator | Tuesday 11 November 2025 00:51:16 +0000 (0:00:00.567) 0:07:48.777 ****** 2025-11-11 00:53:43.164031 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164035 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164040 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.164044 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.164049 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.164053 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.164058 | orchestrator | 2025-11-11 00:53:43.164062 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-11 00:53:43.164067 | orchestrator | Tuesday 11 November 2025 00:51:17 +0000 (0:00:01.263) 0:07:50.040 ****** 2025-11-11 00:53:43.164071 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164076 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164080 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.164085 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.164089 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.164094 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.164098 | orchestrator | 2025-11-11 00:53:43.164103 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-11 00:53:43.164107 | orchestrator | Tuesday 11 November 2025 00:51:18 +0000 (0:00:00.973) 0:07:51.013 ****** 2025-11-11 00:53:43.164112 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.164116 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.164121 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.164125 | orchestrator | skipping: [testbed-node-0] 
2025-11-11 00:53:43.164130 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164134 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164139 | orchestrator | 2025-11-11 00:53:43.164143 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-11 00:53:43.164148 | orchestrator | Tuesday 11 November 2025 00:51:19 +0000 (0:00:00.803) 0:07:51.817 ****** 2025-11-11 00:53:43.164155 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.164160 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.164165 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.164169 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.164173 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.164178 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.164182 | orchestrator | 2025-11-11 00:53:43.164187 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-11 00:53:43.164192 | orchestrator | Tuesday 11 November 2025 00:51:20 +0000 (0:00:00.600) 0:07:52.417 ****** 2025-11-11 00:53:43.164196 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164201 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164205 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.164210 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164214 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164219 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164223 | orchestrator | 2025-11-11 00:53:43.164228 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-11 00:53:43.164232 | orchestrator | Tuesday 11 November 2025 00:51:21 +0000 (0:00:00.789) 0:07:53.207 ****** 2025-11-11 00:53:43.164237 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164241 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164246 | orchestrator | ok: 
[testbed-node-5] 2025-11-11 00:53:43.164250 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164255 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164259 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164264 | orchestrator | 2025-11-11 00:53:43.164268 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-11 00:53:43.164273 | orchestrator | Tuesday 11 November 2025 00:51:21 +0000 (0:00:00.568) 0:07:53.775 ****** 2025-11-11 00:53:43.164278 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164282 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164287 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.164291 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164296 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164305 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164310 | orchestrator | 2025-11-11 00:53:43.164314 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-11 00:53:43.164319 | orchestrator | Tuesday 11 November 2025 00:51:22 +0000 (0:00:00.767) 0:07:54.542 ****** 2025-11-11 00:53:43.164323 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.164328 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.164332 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.164337 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164341 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164346 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164350 | orchestrator | 2025-11-11 00:53:43.164355 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-11 00:53:43.164359 | orchestrator | Tuesday 11 November 2025 00:51:23 +0000 (0:00:00.570) 0:07:55.113 ****** 2025-11-11 00:53:43.164364 | orchestrator | skipping: [testbed-node-3] 
2025-11-11 00:53:43.164368 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.164373 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.164377 | orchestrator | skipping: [testbed-node-0] 2025-11-11 00:53:43.164382 | orchestrator | skipping: [testbed-node-1] 2025-11-11 00:53:43.164386 | orchestrator | skipping: [testbed-node-2] 2025-11-11 00:53:43.164400 | orchestrator | 2025-11-11 00:53:43.164405 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-11 00:53:43.164410 | orchestrator | Tuesday 11 November 2025 00:51:23 +0000 (0:00:00.753) 0:07:55.867 ****** 2025-11-11 00:53:43.164414 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.164419 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.164423 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.164431 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.164436 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.164440 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.164445 | orchestrator | 2025-11-11 00:53:43.164449 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-11 00:53:43.164454 | orchestrator | Tuesday 11 November 2025 00:51:24 +0000 (0:00:00.585) 0:07:56.452 ****** 2025-11-11 00:53:43.164458 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.164463 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.164467 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.164472 | orchestrator | ok: [testbed-node-0] 2025-11-11 00:53:43.164476 | orchestrator | ok: [testbed-node-1] 2025-11-11 00:53:43.164480 | orchestrator | ok: [testbed-node-2] 2025-11-11 00:53:43.164485 | orchestrator | 2025-11-11 00:53:43.164489 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-11 00:53:43.164494 | orchestrator | Tuesday 11 November 2025 00:51:25 +0000 (0:00:00.751) 
0:07:57.204 ******
2025-11-11 00:53:43.164498 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.164503 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.164507 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.164512 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.164516 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.164521 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.164525 | orchestrator |
2025-11-11 00:53:43.164530 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-11-11 00:53:43.164534 | orchestrator | Tuesday 11 November 2025 00:51:26 +0000 (0:00:01.153) 0:07:58.357 ******
2025-11-11 00:53:43.164539 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.164543 | orchestrator |
2025-11-11 00:53:43.164548 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-11-11 00:53:43.164552 | orchestrator | Tuesday 11 November 2025 00:51:30 +0000 (0:00:03.906) 0:08:02.264 ******
2025-11-11 00:53:43.164557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.164561 | orchestrator |
2025-11-11 00:53:43.164566 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-11-11 00:53:43.164574 | orchestrator | Tuesday 11 November 2025 00:51:32 +0000 (0:00:01.929) 0:08:04.193 ******
2025-11-11 00:53:43.164578 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.164583 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.164587 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.164592 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.164596 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.164601 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.164605 | orchestrator |
2025-11-11 00:53:43.164612 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-11-11 00:53:43.164617 | orchestrator | Tuesday 11 November 2025 00:51:33 +0000 (0:00:01.433) 0:08:05.627 ******
2025-11-11 00:53:43.164621 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.164626 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.164630 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.164635 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.164639 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.164644 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.164648 | orchestrator |
2025-11-11 00:53:43.164653 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-11-11 00:53:43.164657 | orchestrator | Tuesday 11 November 2025 00:51:34 +0000 (0:00:01.151) 0:08:06.778 ******
2025-11-11 00:53:43.164662 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.164668 | orchestrator |
2025-11-11 00:53:43.164672 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-11-11 00:53:43.164677 | orchestrator | Tuesday 11 November 2025 00:51:35 +0000 (0:00:01.171) 0:08:07.950 ******
2025-11-11 00:53:43.164681 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.164686 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.164690 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.164695 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.164699 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.164704 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.164708 | orchestrator |
2025-11-11 00:53:43.164713 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-11-11 00:53:43.164717 | orchestrator | Tuesday 11 November 2025 00:51:37 +0000 (0:00:01.447) 0:08:09.397 ******
2025-11-11 00:53:43.164721 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.164726 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.164730 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.164735 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.164739 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.164743 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.164748 | orchestrator |
2025-11-11 00:53:43.164752 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-11-11 00:53:43.164757 | orchestrator | Tuesday 11 November 2025 00:51:40 +0000 (0:00:03.355) 0:08:12.752 ******
2025-11-11 00:53:43.164762 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-11 00:53:43.164766 | orchestrator |
2025-11-11 00:53:43.164771 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-11-11 00:53:43.164775 | orchestrator | Tuesday 11 November 2025 00:51:41 +0000 (0:00:01.195) 0:08:13.948 ******
2025-11-11 00:53:43.164780 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.164784 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.164789 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.164793 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.164798 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.164802 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.164806 | orchestrator |
2025-11-11 00:53:43.164811 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-11-11 00:53:43.164821 | orchestrator | Tuesday 11 November 2025 00:51:42 +0000 (0:00:00.620) 0:08:14.568 ******
2025-11-11 00:53:43.164826 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.164830 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.164835 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.164843 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:53:43.164847 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:53:43.164852 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:53:43.164856 | orchestrator |
2025-11-11 00:53:43.164861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-11-11 00:53:43.164865 | orchestrator | Tuesday 11 November 2025 00:51:44 +0000 (0:00:02.067) 0:08:16.636 ******
2025-11-11 00:53:43.164870 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.164874 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.164878 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.164883 | orchestrator | ok: [testbed-node-0]
2025-11-11 00:53:43.164887 | orchestrator | ok: [testbed-node-1]
2025-11-11 00:53:43.164892 | orchestrator | ok: [testbed-node-2]
2025-11-11 00:53:43.164896 | orchestrator |
2025-11-11 00:53:43.164901 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-11-11 00:53:43.164905 | orchestrator |
2025-11-11 00:53:43.164910 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-11 00:53:43.164914 | orchestrator | Tuesday 11 November 2025 00:51:45 +0000 (0:00:01.044) 0:08:17.680 ******
2025-11-11 00:53:43.164919 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.164923 | orchestrator |
2025-11-11 00:53:43.164928 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-11 00:53:43.164932 | orchestrator | Tuesday 11 November 2025 00:51:46 +0000 (0:00:00.695) 0:08:18.376 ******
2025-11-11 00:53:43.164937 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.164941 | orchestrator |
2025-11-11 00:53:43.164945 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-11 00:53:43.164950 | orchestrator | Tuesday 11 November 2025 00:51:46 +0000 (0:00:00.508) 0:08:18.884 ******
2025-11-11 00:53:43.164954 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.164959 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.164963 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.164968 | orchestrator |
2025-11-11 00:53:43.164972 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-11 00:53:43.164977 | orchestrator | Tuesday 11 November 2025 00:51:47 +0000 (0:00:00.295) 0:08:19.179 ******
2025-11-11 00:53:43.164981 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.164986 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.164990 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.164994 | orchestrator |
2025-11-11 00:53:43.165002 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-11 00:53:43.165007 | orchestrator | Tuesday 11 November 2025 00:51:48 +0000 (0:00:00.937) 0:08:20.116 ******
2025-11-11 00:53:43.165011 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165015 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165020 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165024 | orchestrator |
2025-11-11 00:53:43.165029 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-11 00:53:43.165033 | orchestrator | Tuesday 11 November 2025 00:51:48 +0000 (0:00:00.757) 0:08:20.874 ******
2025-11-11 00:53:43.165038 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165042 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165046 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165051 | orchestrator |
2025-11-11 00:53:43.165055 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-11 00:53:43.165060 | orchestrator | Tuesday 11 November 2025 00:51:49 +0000 (0:00:00.686) 0:08:21.561 ******
2025-11-11 00:53:43.165069 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165074 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165078 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165083 | orchestrator |
2025-11-11 00:53:43.165087 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-11 00:53:43.165092 | orchestrator | Tuesday 11 November 2025 00:51:49 +0000 (0:00:00.314) 0:08:21.875 ******
2025-11-11 00:53:43.165096 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165101 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165105 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165110 | orchestrator |
2025-11-11 00:53:43.165114 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-11 00:53:43.165119 | orchestrator | Tuesday 11 November 2025 00:51:50 +0000 (0:00:00.536) 0:08:22.412 ******
2025-11-11 00:53:43.165123 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165127 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165132 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165136 | orchestrator |
2025-11-11 00:53:43.165141 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-11 00:53:43.165145 | orchestrator | Tuesday 11 November 2025 00:51:50 +0000 (0:00:00.286) 0:08:22.699 ******
2025-11-11 00:53:43.165150 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165154 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165159 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165163 | orchestrator |
2025-11-11 00:53:43.165168 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-11 00:53:43.165172 | orchestrator | Tuesday 11 November 2025 00:51:51 +0000 (0:00:00.727) 0:08:23.426 ******
2025-11-11 00:53:43.165177 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165181 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165186 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165190 | orchestrator |
2025-11-11 00:53:43.165195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-11 00:53:43.165199 | orchestrator | Tuesday 11 November 2025 00:51:52 +0000 (0:00:00.721) 0:08:24.147 ******
2025-11-11 00:53:43.165204 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165208 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165213 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165217 | orchestrator |
2025-11-11 00:53:43.165222 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-11 00:53:43.165226 | orchestrator | Tuesday 11 November 2025 00:51:52 +0000 (0:00:00.535) 0:08:24.683 ******
2025-11-11 00:53:43.165231 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165235 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165239 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165244 | orchestrator |
2025-11-11 00:53:43.165251 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-11 00:53:43.165256 | orchestrator | Tuesday 11 November 2025 00:51:52 +0000 (0:00:00.278) 0:08:24.962 ******
2025-11-11 00:53:43.165261 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165265 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165270 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165274 | orchestrator |
2025-11-11 00:53:43.165279 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-11 00:53:43.165283 | orchestrator | Tuesday 11 November 2025 00:51:53 +0000 (0:00:00.327) 0:08:25.289 ******
2025-11-11 00:53:43.165288 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165292 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165297 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165301 | orchestrator |
2025-11-11 00:53:43.165306 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-11 00:53:43.165310 | orchestrator | Tuesday 11 November 2025 00:51:53 +0000 (0:00:00.310) 0:08:25.599 ******
2025-11-11 00:53:43.165315 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165319 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165327 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165332 | orchestrator |
2025-11-11 00:53:43.165336 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-11 00:53:43.165341 | orchestrator | Tuesday 11 November 2025 00:51:54 +0000 (0:00:00.531) 0:08:26.131 ******
2025-11-11 00:53:43.165345 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165350 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165354 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165359 | orchestrator |
2025-11-11 00:53:43.165363 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-11 00:53:43.165368 | orchestrator | Tuesday 11 November 2025 00:51:54 +0000 (0:00:00.288) 0:08:26.420 ******
2025-11-11 00:53:43.165372 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165377 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165381 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165385 | orchestrator |
2025-11-11 00:53:43.165413 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-11 00:53:43.165419 | orchestrator | Tuesday 11 November 2025 00:51:54 +0000 (0:00:00.318) 0:08:26.739 ******
2025-11-11 00:53:43.165423 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165428 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165432 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165437 | orchestrator |
2025-11-11 00:53:43.165441 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-11 00:53:43.165449 | orchestrator | Tuesday 11 November 2025 00:51:54 +0000 (0:00:00.299) 0:08:27.039 ******
2025-11-11 00:53:43.165454 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165458 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165462 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165467 | orchestrator |
2025-11-11 00:53:43.165472 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-11 00:53:43.165476 | orchestrator | Tuesday 11 November 2025 00:51:55 +0000 (0:00:00.576) 0:08:27.615 ******
2025-11-11 00:53:43.165481 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165485 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.165490 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.165494 | orchestrator |
2025-11-11 00:53:43.165499 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-11-11 00:53:43.165503 | orchestrator | Tuesday 11 November 2025 00:51:56 +0000 (0:00:00.523) 0:08:28.138 ******
2025-11-11 00:53:43.165508 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165512 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165517 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-11-11 00:53:43.165521 | orchestrator |
2025-11-11 00:53:43.165526 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-11-11 00:53:43.165530 | orchestrator | Tuesday 11 November 2025 00:51:56 +0000 (0:00:00.376) 0:08:28.515 ******
2025-11-11 00:53:43.165535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.165539 | orchestrator |
2025-11-11 00:53:43.165544 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-11-11 00:53:43.165548 | orchestrator | Tuesday 11 November 2025 00:51:58 +0000 (0:00:02.501) 0:08:31.016 ******
2025-11-11 00:53:43.165554 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-11-11 00:53:43.165561 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165565 | orchestrator |
2025-11-11 00:53:43.165570 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-11-11 00:53:43.165575 | orchestrator | Tuesday 11 November 2025 00:51:59 +0000 (0:00:00.216) 0:08:31.233 ******
2025-11-11 00:53:43.165581 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-11 00:53:43.165596 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-11 00:53:43.165601 | orchestrator |
2025-11-11 00:53:43.165605 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-11-11 00:53:43.165610 | orchestrator | Tuesday 11 November 2025 00:52:06 +0000 (0:00:07.500) 0:08:38.734 ******
2025-11-11 00:53:43.165614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-11 00:53:43.165619 | orchestrator |
2025-11-11 00:53:43.165626 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-11-11 00:53:43.165630 | orchestrator | Tuesday 11 November 2025 00:52:10 +0000 (0:00:03.530) 0:08:42.264 ******
2025-11-11 00:53:43.165634 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.165638 | orchestrator |
2025-11-11 00:53:43.165643 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-11-11 00:53:43.165647 | orchestrator | Tuesday 11 November 2025 00:52:10 +0000 (0:00:00.528) 0:08:42.792 ******
2025-11-11 00:53:43.165651 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-11 00:53:43.165655 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-11 00:53:43.165659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-11-11 00:53:43.165663 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-11-11 00:53:43.165667 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-11 00:53:43.165671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-11-11 00:53:43.165675 | orchestrator |
2025-11-11 00:53:43.165679 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-11-11 00:53:43.165683 | orchestrator | Tuesday 11 November 2025 00:52:11 +0000 (0:00:01.292) 0:08:44.084 ******
2025-11-11 00:53:43.165687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.165692 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.165696 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-11-11 00:53:43.165700 | orchestrator |
2025-11-11 00:53:43.165704 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-11-11 00:53:43.165708 | orchestrator | Tuesday 11 November 2025 00:52:14 +0000 (0:00:02.036) 0:08:46.121 ******
2025-11-11 00:53:43.165712 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.165716 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-11 00:53:43.165720 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165724 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-11 00:53:43.165728 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-11-11 00:53:43.165732 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165739 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-11 00:53:43.165744 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-11-11 00:53:43.165748 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165752 | orchestrator |
2025-11-11 00:53:43.165756 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-11-11 00:53:43.165760 | orchestrator | Tuesday 11 November 2025 00:52:15 +0000 (0:00:01.160) 0:08:47.281 ******
2025-11-11 00:53:43.165764 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165768 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165772 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165776 | orchestrator |
2025-11-11 00:53:43.165785 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-11-11 00:53:43.165789 | orchestrator | Tuesday 11 November 2025 00:52:18 +0000 (0:00:02.838) 0:08:50.120 ******
2025-11-11 00:53:43.165793 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.165797 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.165801 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.165805 | orchestrator |
2025-11-11 00:53:43.165809 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-11-11 00:53:43.165813 | orchestrator | Tuesday 11 November 2025 00:52:18 +0000 (0:00:00.314) 0:08:50.434 ******
2025-11-11 00:53:43.165817 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.165822 | orchestrator |
2025-11-11 00:53:43.165826 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-11-11 00:53:43.165830 | orchestrator | Tuesday 11 November 2025 00:52:19 +0000 (0:00:00.900) 0:08:51.335 ******
2025-11-11 00:53:43.165834 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.165838 | orchestrator |
2025-11-11 00:53:43.165842 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-11-11 00:53:43.165846 | orchestrator | Tuesday 11 November 2025 00:52:19 +0000 (0:00:00.513) 0:08:51.848 ******
2025-11-11 00:53:43.165850 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165854 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165859 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165863 | orchestrator |
2025-11-11 00:53:43.165868 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-11-11 00:53:43.165874 | orchestrator | Tuesday 11 November 2025 00:52:21 +0000 (0:00:01.625) 0:08:53.474 ******
2025-11-11 00:53:43.165881 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165888 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165894 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165901 | orchestrator |
2025-11-11 00:53:43.165907 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-11-11 00:53:43.165914 | orchestrator | Tuesday 11 November 2025 00:52:22 +0000 (0:00:01.142) 0:08:54.616 ******
2025-11-11 00:53:43.165920 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165927 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165934 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165941 | orchestrator |
2025-11-11 00:53:43.165947 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-11-11 00:53:43.165954 | orchestrator | Tuesday 11 November 2025 00:52:24 +0000 (0:00:01.804) 0:08:56.420 ******
2025-11-11 00:53:43.165960 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.165967 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.165972 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.165976 | orchestrator |
2025-11-11 00:53:43.165984 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-11-11 00:53:43.165988 | orchestrator | Tuesday 11 November 2025 00:52:26 +0000 (0:00:01.886) 0:08:58.307 ******
2025-11-11 00:53:43.165992 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.165996 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166000 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166004 | orchestrator |
2025-11-11 00:53:43.166008 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-11-11 00:53:43.166031 | orchestrator | Tuesday 11 November 2025 00:52:27 +0000 (0:00:01.490) 0:08:59.797 ******
2025-11-11 00:53:43.166037 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.166041 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.166046 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.166050 | orchestrator |
2025-11-11 00:53:43.166054 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-11-11 00:53:43.166058 | orchestrator | Tuesday 11 November 2025 00:52:28 +0000 (0:00:00.639) 0:09:00.437 ******
2025-11-11 00:53:43.166067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.166071 | orchestrator |
2025-11-11 00:53:43.166076 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-11-11 00:53:43.166080 | orchestrator | Tuesday 11 November 2025 00:52:29 +0000 (0:00:00.752) 0:09:01.189 ******
2025-11-11 00:53:43.166084 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166088 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166092 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166096 | orchestrator |
2025-11-11 00:53:43.166100 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-11-11 00:53:43.166104 | orchestrator | Tuesday 11 November 2025 00:52:29 +0000 (0:00:00.310) 0:09:01.499 ******
2025-11-11 00:53:43.166109 | orchestrator | changed: [testbed-node-3]
2025-11-11 00:53:43.166113 | orchestrator | changed: [testbed-node-4]
2025-11-11 00:53:43.166117 | orchestrator | changed: [testbed-node-5]
2025-11-11 00:53:43.166121 | orchestrator |
2025-11-11 00:53:43.166125 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-11-11 00:53:43.166129 | orchestrator | Tuesday 11 November 2025 00:52:30 +0000 (0:00:01.226) 0:09:02.726 ******
2025-11-11 00:53:43.166133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-11 00:53:43.166138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-11 00:53:43.166142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-11 00:53:43.166146 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166150 | orchestrator |
2025-11-11 00:53:43.166157 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-11-11 00:53:43.166162 | orchestrator | Tuesday 11 November 2025 00:52:31 +0000 (0:00:01.138) 0:09:03.864 ******
2025-11-11 00:53:43.166166 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166170 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166174 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166178 | orchestrator |
2025-11-11 00:53:43.166182 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-11-11 00:53:43.166187 | orchestrator |
2025-11-11 00:53:43.166191 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-11 00:53:43.166195 | orchestrator | Tuesday 11 November 2025 00:52:32 +0000 (0:00:00.557) 0:09:04.422 ******
2025-11-11 00:53:43.166199 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.166203 | orchestrator |
2025-11-11 00:53:43.166207 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-11 00:53:43.166211 | orchestrator | Tuesday 11 November 2025 00:52:33 +0000 (0:00:00.690) 0:09:05.112 ******
2025-11-11 00:53:43.166215 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.166220 | orchestrator |
2025-11-11 00:53:43.166224 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-11 00:53:43.166228 | orchestrator | Tuesday 11 November 2025 00:52:33 +0000 (0:00:00.504) 0:09:05.617 ******
2025-11-11 00:53:43.166232 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166236 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166240 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166244 | orchestrator |
2025-11-11 00:53:43.166249 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-11 00:53:43.166253 | orchestrator | Tuesday 11 November 2025 00:52:33 +0000 (0:00:00.286) 0:09:05.903 ******
2025-11-11 00:53:43.166257 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166261 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166265 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166269 | orchestrator |
2025-11-11 00:53:43.166273 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-11 00:53:43.166284 | orchestrator | Tuesday 11 November 2025 00:52:34 +0000 (0:00:00.901) 0:09:06.805 ******
2025-11-11 00:53:43.166288 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166292 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166296 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166300 | orchestrator |
2025-11-11 00:53:43.166304 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-11 00:53:43.166309 | orchestrator | Tuesday 11 November 2025 00:52:35 +0000 (0:00:00.705) 0:09:07.510 ******
2025-11-11 00:53:43.166313 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166317 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166321 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166325 | orchestrator |
2025-11-11 00:53:43.166329 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-11 00:53:43.166333 | orchestrator | Tuesday 11 November 2025 00:52:36 +0000 (0:00:00.721) 0:09:08.232 ******
2025-11-11 00:53:43.166337 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166341 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166345 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166349 | orchestrator |
2025-11-11 00:53:43.166353 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-11 00:53:43.166361 | orchestrator | Tuesday 11 November 2025 00:52:36 +0000 (0:00:00.321) 0:09:08.554 ******
2025-11-11 00:53:43.166365 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166369 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166373 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166377 | orchestrator |
2025-11-11 00:53:43.166381 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-11 00:53:43.166386 | orchestrator | Tuesday 11 November 2025 00:52:36 +0000 (0:00:00.529) 0:09:09.084 ******
2025-11-11 00:53:43.166390 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166404 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166408 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166412 | orchestrator |
2025-11-11 00:53:43.166417 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-11 00:53:43.166421 | orchestrator | Tuesday 11 November 2025 00:52:37 +0000 (0:00:00.295) 0:09:09.379 ******
2025-11-11 00:53:43.166425 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166429 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166433 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166437 | orchestrator |
2025-11-11 00:53:43.166441 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-11 00:53:43.166445 | orchestrator | Tuesday 11 November 2025 00:52:38 +0000 (0:00:00.756) 0:09:10.135 ******
2025-11-11 00:53:43.166449 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166453 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166457 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166461 | orchestrator |
2025-11-11 00:53:43.166465 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-11 00:53:43.166469 | orchestrator | Tuesday 11 November 2025 00:52:38 +0000 (0:00:00.515) 0:09:10.822 ******
2025-11-11 00:53:43.166474 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166478 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166482 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166486 | orchestrator |
2025-11-11 00:53:43.166490 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-11 00:53:43.166494 | orchestrator | Tuesday 11 November 2025 00:52:39 +0000 (0:00:00.515) 0:09:11.338 ******
2025-11-11 00:53:43.166498 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166502 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166506 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166510 | orchestrator |
2025-11-11 00:53:43.166514 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-11 00:53:43.166518 | orchestrator | Tuesday 11 November 2025 00:52:39 +0000 (0:00:00.294) 0:09:11.632 ******
2025-11-11 00:53:43.166527 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166534 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166538 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166542 | orchestrator |
2025-11-11 00:53:43.166546 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-11 00:53:43.166550 | orchestrator | Tuesday 11 November 2025 00:52:39 +0000 (0:00:00.328) 0:09:11.960 ******
2025-11-11 00:53:43.166554 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166558 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166562 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166566 | orchestrator |
2025-11-11 00:53:43.166570 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-11 00:53:43.166574 | orchestrator | Tuesday 11 November 2025 00:52:40 +0000 (0:00:00.324) 0:09:12.284 ******
2025-11-11 00:53:43.166578 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166582 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166587 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166591 | orchestrator |
2025-11-11 00:53:43.166595 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-11 00:53:43.166599 | orchestrator | Tuesday 11 November 2025 00:52:40 +0000 (0:00:00.571) 0:09:12.855 ******
2025-11-11 00:53:43.166603 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166607 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166611 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166615 | orchestrator |
2025-11-11 00:53:43.166619 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-11 00:53:43.166623 | orchestrator | Tuesday 11 November 2025 00:52:41 +0000 (0:00:00.301) 0:09:13.157 ******
2025-11-11 00:53:43.166627 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166631 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166635 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166639 | orchestrator |
2025-11-11 00:53:43.166644 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-11 00:53:43.166648 | orchestrator | Tuesday 11 November 2025 00:52:41 +0000 (0:00:00.307) 0:09:13.465 ******
2025-11-11 00:53:43.166652 | orchestrator | skipping: [testbed-node-3]
2025-11-11 00:53:43.166656 | orchestrator | skipping: [testbed-node-4]
2025-11-11 00:53:43.166660 | orchestrator | skipping: [testbed-node-5]
2025-11-11 00:53:43.166664 | orchestrator |
2025-11-11 00:53:43.166668 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-11 00:53:43.166672 | orchestrator | Tuesday 11 November 2025 00:52:41 +0000 (0:00:00.304) 0:09:13.769 ******
2025-11-11 00:53:43.166676 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166680 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166684 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166688 | orchestrator |
2025-11-11 00:53:43.166692 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-11 00:53:43.166696 | orchestrator | Tuesday 11 November 2025 00:52:41 +0000 (0:00:00.318) 0:09:14.087 ******
2025-11-11 00:53:43.166700 | orchestrator | ok: [testbed-node-3]
2025-11-11 00:53:43.166704 | orchestrator | ok: [testbed-node-4]
2025-11-11 00:53:43.166708 | orchestrator | ok: [testbed-node-5]
2025-11-11 00:53:43.166712 | orchestrator |
2025-11-11 00:53:43.166716 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-11-11 00:53:43.166721 | orchestrator | Tuesday 11 November 2025 00:52:42 +0000 (0:00:00.862) 0:09:14.950 ******
2025-11-11 00:53:43.166725 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-11 00:53:43.166729 | orchestrator |
2025-11-11 00:53:43.166733 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-11-11 00:53:43.166737 | orchestrator | Tuesday 11 November 2025 00:52:43 +0000 (0:00:00.501) 0:09:15.452 ******
2025-11-11 00:53:43.166744 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-11 00:53:43.166748 |
orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-11 00:53:43.166756 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-11 00:53:43.166760 | orchestrator | 2025-11-11 00:53:43.166764 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-11 00:53:43.166768 | orchestrator | Tuesday 11 November 2025 00:52:45 +0000 (0:00:02.569) 0:09:18.022 ****** 2025-11-11 00:53:43.166772 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-11 00:53:43.166776 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-11 00:53:43.166781 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.166785 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-11 00:53:43.166789 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-11 00:53:43.166793 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.166797 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-11 00:53:43.166801 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-11 00:53:43.166805 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.166809 | orchestrator | 2025-11-11 00:53:43.166813 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-11-11 00:53:43.166817 | orchestrator | Tuesday 11 November 2025 00:52:47 +0000 (0:00:01.212) 0:09:19.235 ****** 2025-11-11 00:53:43.166821 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.166825 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.166829 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.166833 | orchestrator | 2025-11-11 00:53:43.166837 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-11-11 00:53:43.166841 | orchestrator | Tuesday 11 November 2025 00:52:47 +0000 (0:00:00.322) 0:09:19.558 ****** 2025-11-11 00:53:43.166846 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.166850 | orchestrator | 2025-11-11 00:53:43.166854 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-11-11 00:53:43.166858 | orchestrator | Tuesday 11 November 2025 00:52:48 +0000 (0:00:00.681) 0:09:20.239 ****** 2025-11-11 00:53:43.166862 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.166869 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.166874 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.166878 | orchestrator | 2025-11-11 00:53:43.166882 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-11-11 00:53:43.166886 | orchestrator | Tuesday 11 November 2025 00:52:48 +0000 (0:00:00.761) 0:09:21.001 ****** 2025-11-11 00:53:43.166890 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166894 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-11 00:53:43.166898 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166902 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166906 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 
'localhost' }}] 2025-11-11 00:53:43.166911 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-11 00:53:43.166915 | orchestrator | 2025-11-11 00:53:43.166919 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-11 00:53:43.166923 | orchestrator | Tuesday 11 November 2025 00:52:53 +0000 (0:00:04.266) 0:09:25.267 ****** 2025-11-11 00:53:43.166932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166936 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-11 00:53:43.166941 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166945 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-11 00:53:43.166949 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:53:43.166953 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-11 00:53:43.166957 | orchestrator | 2025-11-11 00:53:43.166961 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-11 00:53:43.166965 | orchestrator | Tuesday 11 November 2025 00:52:55 +0000 (0:00:02.265) 0:09:27.532 ****** 2025-11-11 00:53:43.166969 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-11 00:53:43.166973 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.166977 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-11 00:53:43.166981 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.166985 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-11 00:53:43.166989 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.166993 | orchestrator | 2025-11-11 00:53:43.166997 | orchestrator | TASK [ceph-rgw : Rgw pool creation 
tasks] ************************************** 2025-11-11 00:53:43.167001 | orchestrator | Tuesday 11 November 2025 00:52:56 +0000 (0:00:01.385) 0:09:28.918 ****** 2025-11-11 00:53:43.167008 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-11-11 00:53:43.167013 | orchestrator | 2025-11-11 00:53:43.167017 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-11-11 00:53:43.167021 | orchestrator | Tuesday 11 November 2025 00:52:57 +0000 (0:00:00.236) 0:09:29.155 ****** 2025-11-11 00:53:43.167025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167046 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167050 | orchestrator | 2025-11-11 00:53:43.167054 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-11-11 00:53:43.167058 | orchestrator | Tuesday 11 November 2025 00:52:57 +0000 (0:00:00.595) 0:09:29.750 ****** 2025-11-11 00:53:43.167062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167066 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-11 00:53:43.167086 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167090 | orchestrator | 2025-11-11 00:53:43.167094 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-11-11 00:53:43.167102 | orchestrator | Tuesday 11 November 2025 00:52:58 +0000 (0:00:00.562) 0:09:30.312 ****** 2025-11-11 00:53:43.167107 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-11 00:53:43.167111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-11 00:53:43.167115 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-11 00:53:43.167119 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-11 00:53:43.167124 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2025-11-11 00:53:43.167128 | orchestrator | 2025-11-11 00:53:43.167132 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-11-11 00:53:43.167136 | orchestrator | Tuesday 11 November 2025 00:53:28 +0000 (0:00:30.054) 0:10:00.367 ****** 2025-11-11 00:53:43.167140 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167144 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.167148 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.167153 | orchestrator | 2025-11-11 00:53:43.167157 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-11-11 00:53:43.167161 | orchestrator | Tuesday 11 November 2025 00:53:28 +0000 (0:00:00.339) 0:10:00.706 ****** 2025-11-11 00:53:43.167165 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167169 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.167173 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.167177 | orchestrator | 2025-11-11 00:53:43.167181 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-11-11 00:53:43.167185 | orchestrator | Tuesday 11 November 2025 00:53:28 +0000 (0:00:00.313) 0:10:01.020 ****** 2025-11-11 00:53:43.167189 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.167194 | orchestrator | 2025-11-11 00:53:43.167198 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-11-11 00:53:43.167202 | orchestrator | Tuesday 11 November 2025 00:53:29 +0000 (0:00:00.771) 0:10:01.792 ****** 2025-11-11 00:53:43.167206 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.167210 | orchestrator | 2025-11-11 00:53:43.167216 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2025-11-11 00:53:43.167223 | orchestrator | Tuesday 11 November 2025 00:53:30 +0000 (0:00:00.504) 0:10:02.296 ****** 2025-11-11 00:53:43.167233 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.167239 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.167246 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.167253 | orchestrator | 2025-11-11 00:53:43.167260 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-11-11 00:53:43.167267 | orchestrator | Tuesday 11 November 2025 00:53:31 +0000 (0:00:01.533) 0:10:03.830 ****** 2025-11-11 00:53:43.167273 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.167280 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.167285 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.167289 | orchestrator | 2025-11-11 00:53:43.167293 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-11-11 00:53:43.167297 | orchestrator | Tuesday 11 November 2025 00:53:32 +0000 (0:00:01.235) 0:10:05.065 ****** 2025-11-11 00:53:43.167301 | orchestrator | changed: [testbed-node-3] 2025-11-11 00:53:43.167305 | orchestrator | changed: [testbed-node-4] 2025-11-11 00:53:43.167314 | orchestrator | changed: [testbed-node-5] 2025-11-11 00:53:43.167318 | orchestrator | 2025-11-11 00:53:43.167322 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-11-11 00:53:43.167326 | orchestrator | Tuesday 11 November 2025 00:53:34 +0000 (0:00:01.702) 0:10:06.768 ****** 2025-11-11 00:53:43.167330 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.167334 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.167338 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-11 00:53:43.167342 | orchestrator | 2025-11-11 00:53:43.167347 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-11 00:53:43.167351 | orchestrator | Tuesday 11 November 2025 00:53:37 +0000 (0:00:02.693) 0:10:09.461 ****** 2025-11-11 00:53:43.167355 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167359 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.167363 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.167367 | orchestrator | 2025-11-11 00:53:43.167371 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-11 00:53:43.167375 | orchestrator | Tuesday 11 November 2025 00:53:37 +0000 (0:00:00.339) 0:10:09.801 ****** 2025-11-11 00:53:43.167379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:53:43.167383 | orchestrator | 2025-11-11 00:53:43.167387 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-11 00:53:43.167424 | orchestrator | Tuesday 11 November 2025 00:53:38 +0000 (0:00:00.717) 0:10:10.518 ****** 2025-11-11 00:53:43.167429 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.167434 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.167438 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.167442 | orchestrator | 2025-11-11 00:53:43.167446 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-11 00:53:43.167450 | orchestrator | Tuesday 11 November 2025 00:53:38 +0000 (0:00:00.320) 0:10:10.839 ****** 2025-11-11 00:53:43.167454 | orchestrator | skipping: [testbed-node-3] 2025-11-11 
00:53:43.167458 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:53:43.167474 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:53:43.167478 | orchestrator | 2025-11-11 00:53:43.167482 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-11 00:53:43.167486 | orchestrator | Tuesday 11 November 2025 00:53:39 +0000 (0:00:00.364) 0:10:11.203 ****** 2025-11-11 00:53:43.167489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:53:43.167493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:53:43.167497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:53:43.167500 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:53:43.167504 | orchestrator | 2025-11-11 00:53:43.167508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-11 00:53:43.167511 | orchestrator | Tuesday 11 November 2025 00:53:39 +0000 (0:00:00.865) 0:10:12.069 ****** 2025-11-11 00:53:43.167515 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:53:43.167519 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:53:43.167523 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:53:43.167526 | orchestrator | 2025-11-11 00:53:43.167530 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:53:43.167534 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-11-11 00:53:43.167538 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-11-11 00:53:43.167546 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-11-11 00:53:43.167550 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-11-11 
00:53:43.167554 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-11-11 00:53:43.167558 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-11-11 00:53:43.167561 | orchestrator | 2025-11-11 00:53:43.167565 | orchestrator | 2025-11-11 00:53:43.167569 | orchestrator | 2025-11-11 00:53:43.167575 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:53:43.167579 | orchestrator | Tuesday 11 November 2025 00:53:40 +0000 (0:00:00.227) 0:10:12.296 ****** 2025-11-11 00:53:43.167583 | orchestrator | =============================================================================== 2025-11-11 00:53:43.167586 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.41s 2025-11-11 00:53:43.167590 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.82s 2025-11-11 00:53:43.167594 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.05s 2025-11-11 00:53:43.167598 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.48s 2025-11-11 00:53:43.167601 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.89s 2025-11-11 00:53:43.167605 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.67s 2025-11-11 00:53:43.167609 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.64s 2025-11-11 00:53:43.167612 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.29s 2025-11-11 00:53:43.167616 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.03s 2025-11-11 00:53:43.167620 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.50s 2025-11-11 00:53:43.167624 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.43s 2025-11-11 00:53:43.167627 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.42s 2025-11-11 00:53:43.167631 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.54s 2025-11-11 00:53:43.167635 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.27s 2025-11-11 00:53:43.167638 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.91s 2025-11-11 00:53:43.167642 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.68s 2025-11-11 00:53:43.167646 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.59s 2025-11-11 00:53:43.167649 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.53s 2025-11-11 00:53:43.167656 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.36s 2025-11-11 00:53:43.167659 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.36s 2025-11-11 00:53:43.167663 | orchestrator | 2025-11-11 00:53:43 | INFO  | Wait 1 second(s) until the next check 
2025-11-11 00:53:46.181075 | orchestrator | 2025-11-11 00:53:46 | INFO  | Task fb8680da-6344-40d2-9d46-a1e9e03a45cd is in state STARTED
2025-11-11 00:53:46.181203 | orchestrator | 2025-11-11 00:53:46 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:55:47.939428 | orchestrator | 2025-11-11 00:55:47 | INFO
 | Task fb8680da-6344-40d2-9d46-a1e9e03a45cd is in state STARTED 2025-11-11 00:55:47.939597 | orchestrator | 2025-11-11 00:55:47 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:55:50.992276 | orchestrator | 2025-11-11 00:55:50.992409 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-11 00:55:50.992426 | orchestrator | 2.16.14 2025-11-11 00:55:50.992438 | orchestrator | 2025-11-11 00:55:50.992509 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-11-11 00:55:50.992523 | orchestrator | 2025-11-11 00:55:50.992533 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-11 00:55:50.992544 | orchestrator | Tuesday 11 November 2025 00:53:45 +0000 (0:00:00.455) 0:00:00.455 ****** 2025-11-11 00:55:50.992554 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:55:50.992565 | orchestrator | 2025-11-11 00:55:50.992575 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-11 00:55:50.992585 | orchestrator | Tuesday 11 November 2025 00:53:45 +0000 (0:00:00.519) 0:00:00.974 ****** 2025-11-11 00:55:50.992595 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992605 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992614 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992625 | orchestrator | 2025-11-11 00:55:50.992634 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-11 00:55:50.992644 | orchestrator | Tuesday 11 November 2025 00:53:46 +0000 (0:00:00.572) 0:00:01.546 ****** 2025-11-11 00:55:50.992654 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992663 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992673 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992683 | orchestrator | 
2025-11-11 00:55:50.992692 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-11 00:55:50.992736 | orchestrator | Tuesday 11 November 2025 00:53:46 +0000 (0:00:00.256) 0:00:01.803 ****** 2025-11-11 00:55:50.992746 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992755 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992765 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992774 | orchestrator | 2025-11-11 00:55:50.992784 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-11 00:55:50.992793 | orchestrator | Tuesday 11 November 2025 00:53:47 +0000 (0:00:00.687) 0:00:02.490 ****** 2025-11-11 00:55:50.992803 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992812 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992822 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992831 | orchestrator | 2025-11-11 00:55:50.992841 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-11 00:55:50.992850 | orchestrator | Tuesday 11 November 2025 00:53:47 +0000 (0:00:00.253) 0:00:02.744 ****** 2025-11-11 00:55:50.992860 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992869 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992878 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992888 | orchestrator | 2025-11-11 00:55:50.992897 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-11 00:55:50.992907 | orchestrator | Tuesday 11 November 2025 00:53:47 +0000 (0:00:00.250) 0:00:02.994 ****** 2025-11-11 00:55:50.992916 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.992925 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.992935 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.992944 | orchestrator | 2025-11-11 00:55:50.992954 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2025-11-11 00:55:50.992964 | orchestrator | Tuesday 11 November 2025 00:53:47 +0000 (0:00:00.274) 0:00:03.269 ****** 2025-11-11 00:55:50.992974 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.992984 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.992994 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.993003 | orchestrator | 2025-11-11 00:55:50.993013 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-11 00:55:50.993022 | orchestrator | Tuesday 11 November 2025 00:53:48 +0000 (0:00:00.391) 0:00:03.660 ****** 2025-11-11 00:55:50.993032 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.993041 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.993051 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.993060 | orchestrator | 2025-11-11 00:55:50.993070 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-11 00:55:50.993079 | orchestrator | Tuesday 11 November 2025 00:53:48 +0000 (0:00:00.276) 0:00:03.936 ****** 2025-11-11 00:55:50.993089 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:55:50.993098 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:55:50.993107 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:55:50.993117 | orchestrator | 2025-11-11 00:55:50.993126 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-11 00:55:50.993136 | orchestrator | Tuesday 11 November 2025 00:53:49 +0000 (0:00:00.613) 0:00:04.549 ****** 2025-11-11 00:55:50.993145 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.993155 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.993182 | orchestrator | ok: 
[testbed-node-5] 2025-11-11 00:55:50.993192 | orchestrator | 2025-11-11 00:55:50.993202 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-11 00:55:50.993211 | orchestrator | Tuesday 11 November 2025 00:53:49 +0000 (0:00:00.350) 0:00:04.899 ****** 2025-11-11 00:55:50.993220 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:55:50.993230 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:55:50.993246 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:55:50.993256 | orchestrator | 2025-11-11 00:55:50.993265 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-11 00:55:50.993280 | orchestrator | Tuesday 11 November 2025 00:53:51 +0000 (0:00:01.988) 0:00:06.888 ****** 2025-11-11 00:55:50.993290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-11 00:55:50.993300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-11 00:55:50.993310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-11 00:55:50.993320 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993329 | orchestrator | 2025-11-11 00:55:50.993357 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-11 00:55:50.993368 | orchestrator | Tuesday 11 November 2025 00:53:51 +0000 (0:00:00.497) 0:00:07.385 ****** 2025-11-11 00:55:50.993381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993414 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993423 | orchestrator | 2025-11-11 00:55:50.993433 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-11 00:55:50.993442 | orchestrator | Tuesday 11 November 2025 00:53:52 +0000 (0:00:00.659) 0:00:08.044 ****** 2025-11-11 00:55:50.993481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.993514 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993523 | orchestrator | 2025-11-11 00:55:50.993533 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-11 00:55:50.993543 | orchestrator | Tuesday 11 November 2025 00:53:52 +0000 (0:00:00.314) 0:00:08.358 ****** 2025-11-11 00:55:50.993561 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '58f8939cdaa3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-11 00:53:50.134099', 'end': '2025-11-11 00:53:50.178647', 'delta': '0:00:00.044548', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['58f8939cdaa3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-11-11 00:55:50.993585 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7453ea6c3b72', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-11 00:53:50.792687', 'end': '2025-11-11 00:53:50.843323', 'delta': '0:00:00.050636', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7453ea6c3b72'], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-11-11 00:55:50.993604 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '57da9b73aea3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-11 00:53:51.301884', 'end': '2025-11-11 00:53:51.354529', 'delta': '0:00:00.052645', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['57da9b73aea3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-11-11 00:55:50.993615 | orchestrator | 2025-11-11 00:55:50.993625 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-11 00:55:50.993635 | orchestrator | Tuesday 11 November 2025 00:53:53 +0000 (0:00:00.181) 0:00:08.540 ****** 2025-11-11 00:55:50.993644 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.993654 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.993663 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.993673 | orchestrator | 2025-11-11 00:55:50.993682 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-11 00:55:50.993692 | orchestrator | Tuesday 11 November 2025 00:53:53 +0000 (0:00:00.426) 0:00:08.966 ****** 2025-11-11 00:55:50.993701 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-11-11 00:55:50.993711 | orchestrator | 2025-11-11 00:55:50.993720 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-11 00:55:50.993729 | orchestrator | Tuesday 11 November 2025 
00:53:55 +0000 (0:00:01.644) 0:00:10.611 ****** 2025-11-11 00:55:50.993739 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993749 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.993758 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.993768 | orchestrator | 2025-11-11 00:55:50.993777 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-11 00:55:50.993786 | orchestrator | Tuesday 11 November 2025 00:53:55 +0000 (0:00:00.318) 0:00:10.929 ****** 2025-11-11 00:55:50.993796 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993805 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.993815 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.993824 | orchestrator | 2025-11-11 00:55:50.993833 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-11 00:55:50.993843 | orchestrator | Tuesday 11 November 2025 00:53:56 +0000 (0:00:00.493) 0:00:11.423 ****** 2025-11-11 00:55:50.993852 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993862 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.993877 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.993887 | orchestrator | 2025-11-11 00:55:50.993896 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-11 00:55:50.993906 | orchestrator | Tuesday 11 November 2025 00:53:56 +0000 (0:00:00.564) 0:00:11.987 ****** 2025-11-11 00:55:50.993915 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.993925 | orchestrator | 2025-11-11 00:55:50.993934 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-11 00:55:50.993944 | orchestrator | Tuesday 11 November 2025 00:53:56 +0000 (0:00:00.125) 0:00:12.112 ****** 2025-11-11 00:55:50.993953 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.993963 | 
orchestrator | 2025-11-11 00:55:50.993972 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-11 00:55:50.993981 | orchestrator | Tuesday 11 November 2025 00:53:56 +0000 (0:00:00.224) 0:00:12.337 ****** 2025-11-11 00:55:50.993991 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994000 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994010 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994091 | orchestrator | 2025-11-11 00:55:50.994102 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-11 00:55:50.994112 | orchestrator | Tuesday 11 November 2025 00:53:57 +0000 (0:00:00.311) 0:00:12.648 ****** 2025-11-11 00:55:50.994122 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994132 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994141 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994150 | orchestrator | 2025-11-11 00:55:50.994160 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-11 00:55:50.994170 | orchestrator | Tuesday 11 November 2025 00:53:57 +0000 (0:00:00.325) 0:00:12.974 ****** 2025-11-11 00:55:50.994179 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994189 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994203 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994213 | orchestrator | 2025-11-11 00:55:50.994223 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-11 00:55:50.994232 | orchestrator | Tuesday 11 November 2025 00:53:58 +0000 (0:00:00.494) 0:00:13.469 ****** 2025-11-11 00:55:50.994242 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994251 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994261 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994270 | 
orchestrator | 2025-11-11 00:55:50.994280 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-11 00:55:50.994290 | orchestrator | Tuesday 11 November 2025 00:53:58 +0000 (0:00:00.322) 0:00:13.791 ****** 2025-11-11 00:55:50.994299 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994309 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994318 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994328 | orchestrator | 2025-11-11 00:55:50.994337 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-11 00:55:50.994347 | orchestrator | Tuesday 11 November 2025 00:53:58 +0000 (0:00:00.295) 0:00:14.087 ****** 2025-11-11 00:55:50.994357 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994366 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994376 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994392 | orchestrator | 2025-11-11 00:55:50.994403 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-11 00:55:50.994412 | orchestrator | Tuesday 11 November 2025 00:53:58 +0000 (0:00:00.302) 0:00:14.389 ****** 2025-11-11 00:55:50.994422 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994431 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.994441 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.994519 | orchestrator | 2025-11-11 00:55:50.994531 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-11 00:55:50.994540 | orchestrator | Tuesday 11 November 2025 00:53:59 +0000 (0:00:00.470) 0:00:14.860 ****** 2025-11-11 00:55:50.994561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962', 'dm-uuid-LVM-S3eHSIHD1uB7sO1A8koWrLKT6fx6SNzYKx9W40acTdBKUd94RLezbMeDN5mN8Ppa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2', 'dm-uuid-LVM-w2QPVVCR86DwYlu6QkrJB3O0tNM0SE156HGTBXUZ41JMh0kCuzp1wpN1AfNOcRPH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHG2AP-RMJ2-XU8z-urBi-TjE9-JjnK-7sRCVo', 'scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882', 'scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8', 'dm-uuid-LVM-rmBHXKFLezqR10dP8H8U0r7XHP1E6d7zmL7pdLKpm9l3PbZSXHgncigSDt2qbhDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bitFob-cdYD-3rME-pWpf-d0Oe-tZrO-DmmTUg', 'scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d', 'scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf', 'dm-uuid-LVM-3K6zWV4stuwcIKNGbseHBnjnPBQejVH5dka1KYQmQi8xGrRJGL8kZuALLXlxS0jx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f', 'scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-11-11 00:55:50.994844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994892 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.994902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-o3BmgF-O8xS-vwKg-1Fio-AIbW-OsjX-fvHcQf', 'scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34', 'scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e', 'dm-uuid-LVM-vabDWv0fZujdkgKW70tGqRuYZFTGJ2DYEcNW99loAKZ0E3ZBfyz83GFwvhxd4o8Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.994981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgDw9r-q7oi-0h92-qUur-IBWW-GhWX-g3sn3E', 'scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3', 'scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.994991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6', 'dm-uuid-LVM-Nnz1FmMFX1o5YKqamCRJyumvXH3t2V0QCTNvmf9iEynTtPBkYcJamWNGQMCvfsTh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860', 'scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-11-11 00:55:50.995060 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.995070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-11 00:55:50.995140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6ebA-L3W2-mfXJ-5sdZ-KmFZ-RNtR-ZRy3R1', 'scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e', 'scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DqlukF-ForK-kz1J-Gc1r-CrEx-haMu-1ZiUZB', 'scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7', 'scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c', 'scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-11 00:55:50.995200 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.995208 | orchestrator | 2025-11-11 00:55:50.995216 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-11 00:55:50.995224 | orchestrator | Tuesday 11 November 2025 00:53:59 +0000 (0:00:00.514) 0:00:15.375 ****** 2025-11-11 00:55:50.995234 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962', 'dm-uuid-LVM-S3eHSIHD1uB7sO1A8koWrLKT6fx6SNzYKx9W40acTdBKUd94RLezbMeDN5mN8Ppa'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2', 'dm-uuid-LVM-w2QPVVCR86DwYlu6QkrJB3O0tNM0SE156HGTBXUZ41JMh0kCuzp1wpN1AfNOcRPH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995344 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8', 'dm-uuid-LVM-rmBHXKFLezqR10dP8H8U0r7XHP1E6d7zmL7pdLKpm9l3PbZSXHgncigSDt2qbhDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f52fcc5-8f85-4748-8d8f-0da86b7c7013-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-11 00:55:50.995371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf', 'dm-uuid-LVM-3K6zWV4stuwcIKNGbseHBnjnPBQejVH5dka1KYQmQi8xGrRJGL8kZuALLXlxS0jx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--01811ce3--d07c--5516--bfbb--fba58f4d4962-osd--block--01811ce3--d07c--5516--bfbb--fba58f4d4962'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHG2AP-RMJ2-XU8z-urBi-TjE9-JjnK-7sRCVo', 'scsi-0QEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882', 'scsi-SQEMU_QEMU_HARDDISK_40873841-1866-4eee-bbb6-ab8fbb214882'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995412 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2-osd--block--d28d894f--b2f1--5cbd--bb27--7fcd31d1cec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bitFob-cdYD-3rME-pWpf-d0Oe-tZrO-DmmTUg', 'scsi-0QEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d', 'scsi-SQEMU_QEMU_HARDDISK_75ea1c13-08ac-4925-8283-d5e2f994ce5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f', 'scsi-SQEMU_QEMU_HARDDISK_89b8de45-7543-4421-bfde-713d4c35668f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995496 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995504 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.995513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995530 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d762de08-88e9-4a05-8401-0b276306fde5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-11 00:55:50.995609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8-osd--block--1efdad6c--d6bf--5a45--aa4b--bff5b179c7b8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-o3BmgF-O8xS-vwKg-1Fio-AIbW-OsjX-fvHcQf', 'scsi-0QEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34', 'scsi-SQEMU_QEMU_HARDDISK_e779f17b-a915-42a5-9da7-11da2e062a34'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e', 'dm-uuid-LVM-vabDWv0fZujdkgKW70tGqRuYZFTGJ2DYEcNW99loAKZ0E3ZBfyz83GFwvhxd4o8Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1fda84b1--4127--5701--96e6--fb2774ba2cbf-osd--block--1fda84b1--4127--5701--96e6--fb2774ba2cbf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgDw9r-q7oi-0h92-qUur-IBWW-GhWX-g3sn3E', 'scsi-0QEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3', 'scsi-SQEMU_QEMU_HARDDISK_0178bab0-214e-4a1b-9430-5e2bb66f07d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6', 'dm-uuid-LVM-Nnz1FmMFX1o5YKqamCRJyumvXH3t2V0QCTNvmf9iEynTtPBkYcJamWNGQMCvfsTh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860', 'scsi-SQEMU_QEMU_HARDDISK_f9373fbe-39b8-4f8c-b928-1a6d36b5f860'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995688 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995700 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.995709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995724 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995733 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b5ddd48-c5c8-4302-a57f-63bca86c5d46-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995793 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--af11c135--cf10--5d68--b776--281fb5d39e8e-osd--block--af11c135--cf10--5d68--b776--281fb5d39e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6ebA-L3W2-mfXJ-5sdZ-KmFZ-RNtR-ZRy3R1', 'scsi-0QEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e', 'scsi-SQEMU_QEMU_HARDDISK_83daedb9-81f3-45a4-88c7-2785338cd97e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a1515626--32f0--5abe--9383--a4f06f352cf6-osd--block--a1515626--32f0--5abe--9383--a4f06f352cf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DqlukF-ForK-kz1J-Gc1r-CrEx-haMu-1ZiUZB', 'scsi-0QEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7', 'scsi-SQEMU_QEMU_HARDDISK_9b408528-4a47-4f88-ab85-e4a870a278b7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c', 'scsi-SQEMU_QEMU_HARDDISK_389e8dac-4c9f-40ba-96aa-7c861964ff1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-11 00:55:50.995841 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.995849 | orchestrator | 2025-11-11 00:55:50.995857 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-11 00:55:50.995865 | orchestrator | Tuesday 11 November 2025 00:54:00 +0000 (0:00:00.595) 0:00:15.970 ****** 2025-11-11 00:55:50.995873 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.995881 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.995889 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.995897 | orchestrator | 2025-11-11 00:55:50.995905 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-11-11 00:55:50.995912 | orchestrator | Tuesday 11 November 2025 00:54:01 +0000 (0:00:00.660) 0:00:16.631 ****** 2025-11-11 00:55:50.995920 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.995928 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.995936 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.995943 | orchestrator | 2025-11-11 00:55:50.995951 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-11 00:55:50.995959 | orchestrator | Tuesday 11 November 2025 00:54:01 +0000 (0:00:00.470) 0:00:17.101 ****** 2025-11-11 00:55:50.995967 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.995975 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.995989 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.995996 | orchestrator | 2025-11-11 00:55:50.996004 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-11 00:55:50.996012 | orchestrator | Tuesday 11 November 2025 00:54:02 +0000 (0:00:00.627) 0:00:17.729 ****** 2025-11-11 00:55:50.996020 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996028 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996036 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996043 | orchestrator | 2025-11-11 00:55:50.996051 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-11 00:55:50.996059 | orchestrator | Tuesday 11 November 2025 00:54:02 +0000 (0:00:00.299) 0:00:18.028 ****** 2025-11-11 00:55:50.996066 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996074 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996082 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996090 | orchestrator | 2025-11-11 00:55:50.996097 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-11-11 00:55:50.996105 | orchestrator | Tuesday 11 November 2025 00:54:03 +0000 (0:00:00.390) 0:00:18.419 ****** 2025-11-11 00:55:50.996113 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996121 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996128 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996136 | orchestrator | 2025-11-11 00:55:50.996144 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-11 00:55:50.996151 | orchestrator | Tuesday 11 November 2025 00:54:03 +0000 (0:00:00.496) 0:00:18.916 ****** 2025-11-11 00:55:50.996159 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-11 00:55:50.996168 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-11 00:55:50.996176 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-11 00:55:50.996183 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-11-11 00:55:50.996192 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-11 00:55:50.996199 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-11 00:55:50.996207 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-11 00:55:50.996215 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-11 00:55:50.996222 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-11 00:55:50.996237 | orchestrator | 2025-11-11 00:55:50.996250 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-11 00:55:50.996262 | orchestrator | Tuesday 11 November 2025 00:54:04 +0000 (0:00:00.825) 0:00:19.741 ****** 2025-11-11 00:55:50.996273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-11 00:55:50.996286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-11 00:55:50.996300 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-11-11 00:55:50.996312 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-11 00:55:50.996333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-11 00:55:50.996346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-11 00:55:50.996354 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996361 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-11 00:55:50.996369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-11 00:55:50.996377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-11 00:55:50.996385 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996392 | orchestrator | 2025-11-11 00:55:50.996400 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-11 00:55:50.996408 | orchestrator | Tuesday 11 November 2025 00:54:04 +0000 (0:00:00.331) 0:00:20.073 ****** 2025-11-11 00:55:50.996416 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-11 00:55:50.996431 | orchestrator | 2025-11-11 00:55:50.996439 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-11 00:55:50.996468 | orchestrator | Tuesday 11 November 2025 00:54:05 +0000 (0:00:00.670) 0:00:20.743 ****** 2025-11-11 00:55:50.996483 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996491 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996499 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996507 | orchestrator | 2025-11-11 00:55:50.996515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-11-11 00:55:50.996523 | orchestrator | Tuesday 11 November 2025 00:54:05 +0000 (0:00:00.332) 0:00:21.076 ****** 2025-11-11 00:55:50.996533 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996547 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996563 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996583 | orchestrator | 2025-11-11 00:55:50.996596 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-11 00:55:50.996609 | orchestrator | Tuesday 11 November 2025 00:54:05 +0000 (0:00:00.306) 0:00:21.383 ****** 2025-11-11 00:55:50.996621 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996633 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.996645 | orchestrator | skipping: [testbed-node-5] 2025-11-11 00:55:50.996658 | orchestrator | 2025-11-11 00:55:50.996670 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-11 00:55:50.996684 | orchestrator | Tuesday 11 November 2025 00:54:06 +0000 (0:00:00.293) 0:00:21.677 ****** 2025-11-11 00:55:50.996696 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.996709 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.996720 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.996732 | orchestrator | 2025-11-11 00:55:50.996745 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-11 00:55:50.996757 | orchestrator | Tuesday 11 November 2025 00:54:06 +0000 (0:00:00.582) 0:00:22.260 ****** 2025-11-11 00:55:50.996771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:55:50.996784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:55:50.996798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:55:50.996810 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996818 | 
orchestrator | 2025-11-11 00:55:50.996826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-11 00:55:50.996834 | orchestrator | Tuesday 11 November 2025 00:54:07 +0000 (0:00:00.374) 0:00:22.635 ****** 2025-11-11 00:55:50.996842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:55:50.996850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:55:50.996858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:55:50.996866 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996874 | orchestrator | 2025-11-11 00:55:50.996882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-11 00:55:50.996890 | orchestrator | Tuesday 11 November 2025 00:54:07 +0000 (0:00:00.354) 0:00:22.989 ****** 2025-11-11 00:55:50.996898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-11 00:55:50.996906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-11 00:55:50.996913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-11 00:55:50.996921 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.996929 | orchestrator | 2025-11-11 00:55:50.996937 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-11 00:55:50.996944 | orchestrator | Tuesday 11 November 2025 00:54:07 +0000 (0:00:00.352) 0:00:23.341 ****** 2025-11-11 00:55:50.996952 | orchestrator | ok: [testbed-node-3] 2025-11-11 00:55:50.996960 | orchestrator | ok: [testbed-node-4] 2025-11-11 00:55:50.996968 | orchestrator | ok: [testbed-node-5] 2025-11-11 00:55:50.996987 | orchestrator | 2025-11-11 00:55:50.996995 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-11 00:55:50.997002 | orchestrator | Tuesday 11 November 2025 00:54:08 
+0000 (0:00:00.306) 0:00:23.647 ****** 2025-11-11 00:55:50.997011 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-11 00:55:50.997018 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-11 00:55:50.997026 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-11 00:55:50.997034 | orchestrator | 2025-11-11 00:55:50.997042 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-11 00:55:50.997050 | orchestrator | Tuesday 11 November 2025 00:54:08 +0000 (0:00:00.501) 0:00:24.148 ****** 2025-11-11 00:55:50.997058 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:55:50.997066 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:55:50.997074 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:55:50.997082 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-11 00:55:50.997090 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-11 00:55:50.997107 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-11 00:55:50.997115 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-11 00:55:50.997123 | orchestrator | 2025-11-11 00:55:50.997132 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-11 00:55:50.997139 | orchestrator | Tuesday 11 November 2025 00:54:09 +0000 (0:00:00.933) 0:00:25.082 ****** 2025-11-11 00:55:50.997147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-11 00:55:50.997155 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-11 00:55:50.997163 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-11 00:55:50.997171 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-11 00:55:50.997179 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-11 00:55:50.997187 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-11 00:55:50.997202 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-11 00:55:50.997210 | orchestrator | 2025-11-11 00:55:50.997218 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-11-11 00:55:50.997226 | orchestrator | Tuesday 11 November 2025 00:54:11 +0000 (0:00:01.901) 0:00:26.983 ****** 2025-11-11 00:55:50.997234 | orchestrator | skipping: [testbed-node-3] 2025-11-11 00:55:50.997242 | orchestrator | skipping: [testbed-node-4] 2025-11-11 00:55:50.997250 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-11-11 00:55:50.997258 | orchestrator | 2025-11-11 00:55:50.997266 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-11-11 00:55:50.997274 | orchestrator | Tuesday 11 November 2025 00:54:11 +0000 (0:00:00.402) 0:00:27.386 ****** 2025-11-11 00:55:50.997283 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-11 00:55:50.997292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-11-11 00:55:50.997301 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-11 00:55:50.997316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-11 00:55:50.997324 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-11 00:55:50.997333 | orchestrator | 2025-11-11 00:55:50.997340 | orchestrator | TASK [generate keys] *********************************************************** 2025-11-11 00:55:50.997348 | orchestrator | Tuesday 11 November 2025 00:54:55 +0000 (0:00:43.824) 0:01:11.210 ****** 2025-11-11 00:55:50.997356 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997364 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997395 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 
00:55:50.997403 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-11-11 00:55:50.997411 | orchestrator | 2025-11-11 00:55:50.997419 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-11-11 00:55:50.997426 | orchestrator | Tuesday 11 November 2025 00:55:19 +0000 (0:00:23.383) 0:01:34.594 ****** 2025-11-11 00:55:50.997435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997489 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997497 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997513 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-11 00:55:50.997520 | orchestrator | 2025-11-11 00:55:50.997528 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-11-11 00:55:50.997536 | orchestrator | Tuesday 11 November 2025 00:55:30 +0000 (0:00:11.266) 0:01:45.860 ****** 2025-11-11 00:55:50.997544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997552 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997567 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997575 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997659 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997670 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997678 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997686 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997709 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997717 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997733 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997741 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997749 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-11 00:55:50.997756 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-11 00:55:50.997764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-11 00:55:50.997772 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-11-11 00:55:50.997780 | orchestrator | 2025-11-11 00:55:50.997788 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-11 00:55:50.997796 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-11-11 00:55:50.997805 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-11-11 00:55:50.997813 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-11 00:55:50.997821 | orchestrator | 2025-11-11 00:55:50.997829 | orchestrator | 2025-11-11 00:55:50.997837 | orchestrator | 2025-11-11 00:55:50.997845 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-11 00:55:50.997852 | orchestrator | Tuesday 11 November 2025 00:55:47 +0000 (0:00:17.234) 0:02:03.095 ****** 2025-11-11 00:55:50.997860 | orchestrator | =============================================================================== 2025-11-11 00:55:50.997868 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.82s 2025-11-11 00:55:50.997876 | orchestrator | generate keys ---------------------------------------------------------- 23.38s 2025-11-11 00:55:50.997883 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.23s 2025-11-11 00:55:50.997891 | orchestrator | get keys from monitors ------------------------------------------------- 11.27s 2025-11-11 00:55:50.997899 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.99s 2025-11-11 00:55:50.997907 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s 2025-11-11 00:55:50.997914 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s 2025-11-11 00:55:50.997922 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s 2025-11-11 00:55:50.997930 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2025-11-11 00:55:50.997938 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.69s 2025-11-11 
00:55:50.997946 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2025-11-11 00:55:50.997954 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-11-11 00:55:50.997961 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.66s 2025-11-11 00:55:50.997969 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-11-11 00:55:50.997977 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-11-11 00:55:50.997985 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s 2025-11-11 00:55:50.997998 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2025-11-11 00:55:50.998064 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.57s 2025-11-11 00:55:50.998077 | orchestrator | ceph-facts : Set_fact fsid ---------------------------------------------- 0.56s 2025-11-11 00:55:50.998085 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.52s 2025-11-11 00:55:50.998094 | orchestrator | 2025-11-11 00:55:50 | INFO  | Task fb8680da-6344-40d2-9d46-a1e9e03a45cd is in state SUCCESS 2025-11-11 00:55:50.998102 | orchestrator | 2025-11-11 00:55:50 | INFO  | Task 384fabda-2a82-475e-b240-ae3cc1d013d4 is in state STARTED 2025-11-11 00:55:50.998110 | orchestrator | 2025-11-11 00:55:50 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:55:54.045889 | orchestrator | 2025-11-11 00:55:54 | INFO  | Task 384fabda-2a82-475e-b240-ae3cc1d013d4 is in state STARTED 2025-11-11 00:55:54.046003 | orchestrator | 2025-11-11 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:55:57.085378 | orchestrator | 2025-11-11 00:55:57 | INFO  | Task 384fabda-2a82-475e-b240-ae3cc1d013d4 is in state 
STARTED 2025-11-11 00:55:57.085559 | orchestrator | 2025-11-11 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:56:27.531669 | orchestrator | 2025-11-11 00:56:27 | INFO  | Task 439ab61e-5035-4224-9b85-9a8d732d11c7 is in state STARTED 2025-11-11 00:56:27.531936 | orchestrator | 2025-11-11 00:56:27 | INFO  | Task 384fabda-2a82-475e-b240-ae3cc1d013d4 is in state SUCCESS 2025-11-11 00:56:27.532358 | orchestrator | 2025-11-11 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-11-11 00:57:25.358797 | orchestrator | 2025-11-11 00:57:25.358941 | orchestrator | 2025-11-11 00:57:25.358966 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-11-11 00:57:25.358987 | orchestrator | 2025-11-11 00:57:25.359007 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2025-11-11 00:57:25.359028 | orchestrator | Tuesday 11 November 2025 00:55:52 +0000 (0:00:00.153) 0:00:00.153 ****** 2025-11-11 00:57:25.359047 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-11 00:57:25.359067 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11
00:57:25.359085 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359104 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-11 00:57:25.359122 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359141 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-11 00:57:25.359186 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-11 00:57:25.359205 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-11 00:57:25.359224 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-11 00:57:25.359242 | orchestrator | 2025-11-11 00:57:25.359331 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-11-11 00:57:25.359353 | orchestrator | Tuesday 11 November 2025 00:55:56 +0000 (0:00:04.364) 0:00:04.518 ****** 2025-11-11 00:57:25.359374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-11 00:57:25.359396 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359416 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359436 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-11 00:57:25.359456 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359477 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-11 00:57:25.359527 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-11 00:57:25.359546 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-11 00:57:25.359566 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-11 00:57:25.359586 | orchestrator | 2025-11-11 00:57:25.359606 | orchestrator | TASK [Create share directory] ************************************************** 2025-11-11 00:57:25.359665 | orchestrator | Tuesday 11 November 2025 00:56:00 +0000 (0:00:03.926) 0:00:08.445 ****** 2025-11-11 00:57:25.359685 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-11 00:57:25.359704 | orchestrator | 2025-11-11 00:57:25.359723 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-11-11 00:57:25.359742 | orchestrator | Tuesday 11 November 2025 00:56:01 +0000 (0:00:00.953) 0:00:09.398 ****** 2025-11-11 00:57:25.359761 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-11-11 00:57:25.359780 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359799 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359817 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-11-11 00:57:25.359835 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-11 00:57:25.359854 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-11-11 00:57:25.359873 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-11-11 00:57:25.359892 
| orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-11-11 00:57:25.359910 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-11-11 00:57:25.359929 | orchestrator | 2025-11-11 00:57:25.359948 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-11-11 00:57:25.359966 | orchestrator | Tuesday 11 November 2025 00:56:14 +0000 (0:00:12.660) 0:00:22.058 ****** 2025-11-11 00:57:25.359985 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-11-11 00:57:25.360004 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-11-11 00:57:25.360023 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-11 00:57:25.360042 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-11 00:57:25.360090 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-11 00:57:25.360108 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-11 00:57:25.360127 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-11-11 00:57:25.360146 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-11-11 00:57:25.360164 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-11-11 00:57:25.360182 | orchestrator | 2025-11-11 00:57:25.360199 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-11-11 00:57:25.360218 | orchestrator | Tuesday 11 November 
2025 00:56:18 +0000 (0:00:03.909) 0:00:25.968 ******
2025-11-11 00:57:25.360239 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-11-11 00:57:25.360258 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-11 00:57:25.360277 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-11 00:57:25.360296 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-11-11 00:57:25.360326 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-11 00:57:25.360345 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-11-11 00:57:25.360363 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-11-11 00:57:25.360382 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-11-11 00:57:25.360413 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-11-11 00:57:25.360432 | orchestrator |
2025-11-11 00:57:25.360450 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:57:25.360469 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:57:25.360516 | orchestrator |
2025-11-11 00:57:25.360536 | orchestrator |
2025-11-11 00:57:25.360555 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:57:25.360574 | orchestrator | Tuesday 11 November 2025 00:56:24 +0000 (0:00:06.748) 0:00:32.717 ******
2025-11-11 00:57:25.360593 | orchestrator | ===============================================================================
2025-11-11 00:57:25.360612 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.66s
2025-11-11 00:57:25.360630 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.75s
2025-11-11 00:57:25.360649 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.36s
2025-11-11 00:57:25.360668 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.93s
2025-11-11 00:57:25.360687 | orchestrator | Check if target directories exist --------------------------------------- 3.91s
2025-11-11 00:57:25.360706 | orchestrator | Create share directory -------------------------------------------------- 0.95s
2025-11-11 00:57:25.360725 | orchestrator |
2025-11-11 00:57:25.360743 | orchestrator |
2025-11-11 00:57:25.360761 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-11-11 00:57:25.360779 | orchestrator |
2025-11-11 00:57:25.360797 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-11-11 00:57:25.360814 | orchestrator | Tuesday 11 November 2025 00:56:29 +0000 (0:00:00.234) 0:00:00.234 ******
2025-11-11 00:57:25.360830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-11-11 00:57:25.360849 | orchestrator |
2025-11-11 00:57:25.360865 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-11-11 00:57:25.360881 | orchestrator | Tuesday 11 November 2025 00:56:29 +0000 (0:00:00.226) 0:00:00.461 ******
2025-11-11 00:57:25.360898 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-11-11 00:57:25.360914 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-11-11 00:57:25.360930 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-11-11 00:57:25.360946 | orchestrator |
2025-11-11 00:57:25.360964 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-11-11 00:57:25.360981 | orchestrator | Tuesday 11 November 2025 00:56:30 +0000 (0:00:01.237) 0:00:01.699 ******
2025-11-11 00:57:25.360999 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-11-11 00:57:25.361016 | orchestrator |
2025-11-11 00:57:25.361033 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-11-11 00:57:25.361050 | orchestrator | Tuesday 11 November 2025 00:56:32 +0000 (0:00:01.440) 0:00:03.139 ******
2025-11-11 00:57:25.361067 | orchestrator | changed: [testbed-manager]
2025-11-11 00:57:25.361083 | orchestrator |
2025-11-11 00:57:25.361099 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-11-11 00:57:25.361116 | orchestrator | Tuesday 11 November 2025 00:56:33 +0000 (0:00:00.924) 0:00:04.064 ******
2025-11-11 00:57:25.361132 | orchestrator | changed: [testbed-manager]
2025-11-11 00:57:25.361148 | orchestrator |
2025-11-11 00:57:25.361164 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-11-11 00:57:25.361182 | orchestrator | Tuesday 11 November 2025 00:56:34 +0000 (0:00:00.918) 0:00:04.982 ******
2025-11-11 00:57:25.361200 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-11-11 00:57:25.361217 | orchestrator | ok: [testbed-manager]
2025-11-11 00:57:25.361247 | orchestrator |
2025-11-11 00:57:25.361265 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-11-11 00:57:25.361297 | orchestrator | Tuesday 11 November 2025 00:57:14 +0000 (0:00:40.529) 0:00:45.511 ******
2025-11-11 00:57:25.361314 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-11-11 00:57:25.361332 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-11-11 00:57:25.361349 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-11-11 00:57:25.361367 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-11-11 00:57:25.361384 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-11-11 00:57:25.361400 | orchestrator |
2025-11-11 00:57:25.361417 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-11-11 00:57:25.361433 | orchestrator | Tuesday 11 November 2025 00:57:18 +0000 (0:00:04.205) 0:00:49.717 ******
2025-11-11 00:57:25.361449 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-11-11 00:57:25.361466 | orchestrator |
2025-11-11 00:57:25.361542 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-11-11 00:57:25.361565 | orchestrator | Tuesday 11 November 2025 00:57:19 +0000 (0:00:00.476) 0:00:50.193 ******
2025-11-11 00:57:25.361583 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:57:25.361602 | orchestrator |
2025-11-11 00:57:25.361620 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-11-11 00:57:25.361638 | orchestrator | Tuesday 11 November 2025 00:57:19 +0000 (0:00:00.145) 0:00:50.339 ******
2025-11-11 00:57:25.361667 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:57:25.361685 | orchestrator |
2025-11-11 00:57:25.361703 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-11-11 00:57:25.361720 | orchestrator | Tuesday 11 November 2025 00:57:19 +0000 (0:00:00.472) 0:00:50.812 ******
2025-11-11 00:57:25.361737 | orchestrator | changed: [testbed-manager]
2025-11-11 00:57:25.361756 | orchestrator |
2025-11-11 00:57:25.361774 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-11-11 00:57:25.361792 | orchestrator | Tuesday 11 November 2025 00:57:21 +0000 (0:00:01.617) 0:00:52.430 ******
2025-11-11 00:57:25.361808 | orchestrator | changed: [testbed-manager]
2025-11-11 00:57:25.361825 | orchestrator |
2025-11-11 00:57:25.361841 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-11-11 00:57:25.361858 | orchestrator | Tuesday 11 November 2025 00:57:22 +0000 (0:00:00.773) 0:00:53.203 ******
2025-11-11 00:57:25.361874 | orchestrator | changed: [testbed-manager]
2025-11-11 00:57:25.361891 | orchestrator |
2025-11-11 00:57:25.361907 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-11-11 00:57:25.361922 | orchestrator | Tuesday 11 November 2025 00:57:22 +0000 (0:00:00.648) 0:00:53.851 ******
2025-11-11 00:57:25.361937 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-11-11 00:57:25.361953 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-11-11 00:57:25.361969 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-11-11 00:57:25.361985 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-11-11 00:57:25.362002 | orchestrator |
2025-11-11 00:57:25.362093 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:57:25.362121 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-11 00:57:25.362140 | orchestrator |
2025-11-11 00:57:25.362157 | orchestrator |
2025-11-11 00:57:25.362174 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:57:25.362192 | orchestrator | Tuesday 11 November 2025 00:57:24 +0000 (0:00:01.481) 0:00:55.333 ******
2025-11-11 00:57:25.362209 | orchestrator | ===============================================================================
2025-11-11 00:57:25.362226 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.53s
2025-11-11 00:57:25.362244 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s
2025-11-11 00:57:25.362277 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.62s
2025-11-11 00:57:25.362296 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s
2025-11-11 00:57:25.362314 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s
2025-11-11 00:57:25.362333 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.24s
2025-11-11 00:57:25.362351 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s
2025-11-11 00:57:25.362371 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s
2025-11-11 00:57:25.362390 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s
2025-11-11 00:57:25.362407 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-11-11 00:57:25.362423 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s
2025-11-11 00:57:25.362440 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.47s
2025-11-11 00:57:25.362456 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-11-11 00:57:25.362473 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2025-11-11 00:57:25.362522 | orchestrator | 2025-11-11 00:57:25 | INFO  | Task 439ab61e-5035-4224-9b85-9a8d732d11c7 is in state SUCCESS
2025-11-11 00:57:25.362554 | orchestrator | 2025-11-11 00:57:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:57:28.404468 | orchestrator | 2025-11-11 00:57:28 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:28.404651 | orchestrator | 2025-11-11 00:57:28 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:31.447156 | orchestrator | 2025-11-11 00:57:31 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:31.447279 | orchestrator | 2025-11-11 00:57:31 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:34.488352 | orchestrator | 2025-11-11 00:57:34 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:34.488565 | orchestrator | 2025-11-11 00:57:34 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:37.532372 | orchestrator | 2025-11-11 00:57:37 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:37.532572 | orchestrator | 2025-11-11 00:57:37 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:40.574239 | orchestrator | 2025-11-11 00:57:40 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:40.574343 | orchestrator | 2025-11-11 00:57:40 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:43.616577 | orchestrator | 2025-11-11 00:57:43 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:43.616708 | orchestrator | 2025-11-11 00:57:43 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:46.666535 | orchestrator | 2025-11-11 00:57:46 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:46.666604 | orchestrator | 2025-11-11 00:57:46 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:49.709255 | orchestrator | 2025-11-11 00:57:49 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:49.709353 | orchestrator | 2025-11-11 00:57:49 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:52.746469 | orchestrator | 2025-11-11 00:57:52 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:52.746605 | orchestrator | 2025-11-11 00:57:52 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:55.790207 | orchestrator | 2025-11-11 00:57:55 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:55.790303 | orchestrator | 2025-11-11 00:57:55 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:57:58.835057 | orchestrator | 2025-11-11 00:57:58 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:57:58.835175 | orchestrator | 2025-11-11 00:57:58 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:01.876168 | orchestrator | 2025-11-11 00:58:01 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:01.876305 | orchestrator | 2025-11-11 00:58:01 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:04.917211 | orchestrator | 2025-11-11 00:58:04 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:04.917312 | orchestrator | 2025-11-11 00:58:04 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:07.957389 | orchestrator | 2025-11-11 00:58:07 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:07.957487 | orchestrator | 2025-11-11 00:58:07 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:11.000552 | orchestrator | 2025-11-11 00:58:10 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:11.000665 | orchestrator | 2025-11-11 00:58:10 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:14.052988 | orchestrator | 2025-11-11 00:58:14 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:14.053089 | orchestrator | 2025-11-11 00:58:14 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:17.094697 | orchestrator | 2025-11-11 00:58:17 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:17.094807 | orchestrator | 2025-11-11 00:58:17 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:20.133721 | orchestrator | 2025-11-11 00:58:20 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:20.133846 | orchestrator | 2025-11-11 00:58:20 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:23.172134 | orchestrator | 2025-11-11 00:58:23 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:23.172264 | orchestrator | 2025-11-11 00:58:23 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:26.209863 | orchestrator | 2025-11-11 00:58:26 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:26.209981 | orchestrator | 2025-11-11 00:58:26 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:29.253470 | orchestrator | 2025-11-11 00:58:29 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:29.253651 | orchestrator | 2025-11-11 00:58:29 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:32.298561 | orchestrator | 2025-11-11 00:58:32 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:32.298679 | orchestrator | 2025-11-11 00:58:32 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:35.340635 | orchestrator | 2025-11-11 00:58:35 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:35.340723 | orchestrator | 2025-11-11 00:58:35 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:38.383433 | orchestrator | 2025-11-11 00:58:38 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:38.383501 | orchestrator | 2025-11-11 00:58:38 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:41.424585 | orchestrator | 2025-11-11 00:58:41 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state STARTED
2025-11-11 00:58:41.424700 | orchestrator | 2025-11-11 00:58:41 | INFO  | Wait 1 second(s) until the next check
2025-11-11 00:58:44.471963 | orchestrator | 2025-11-11 00:58:44 | INFO  | Task 7b3ae8ed-a514-4e5f-9eeb-9017d3ef21b2 is in state SUCCESS
2025-11-11 00:58:44.472040 | orchestrator | 2025-11-11 00:58:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:58:47.517948 | orchestrator | 2025-11-11 00:58:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:58:50.559623 | orchestrator | 2025-11-11 00:58:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:58:53.599038 | orchestrator | 2025-11-11 00:58:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:58:56.634989 | orchestrator | 2025-11-11 00:58:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:58:59.677126 | orchestrator | 2025-11-11 00:58:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:02.713724 | orchestrator | 2025-11-11 00:59:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:05.755029 | orchestrator | 2025-11-11 00:59:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:08.797936 | orchestrator | 2025-11-11 00:59:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:11.835293 | orchestrator | 2025-11-11 00:59:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:14.877703 | orchestrator | 2025-11-11 00:59:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:17.915219 | orchestrator | 2025-11-11 00:59:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:20.954878 | orchestrator | 2025-11-11 00:59:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:23.991967 | orchestrator | 2025-11-11 00:59:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:27.033285 | orchestrator | 2025-11-11 00:59:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:30.082265 | orchestrator | 2025-11-11 00:59:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:33.123100 | orchestrator | 2025-11-11 00:59:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:36.168454 | orchestrator | 2025-11-11 00:59:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:39.209688 | orchestrator | 2025-11-11 00:59:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-11 00:59:42.251824 | orchestrator |
2025-11-11 00:59:42.251891 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2025-11-11 00:59:42.251898 | orchestrator | 2.16.14
2025-11-11 00:59:42.251903 | orchestrator |
2025-11-11 00:59:42.251908 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-11-11 00:59:42.251913 | orchestrator |
2025-11-11 00:59:42.251918 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-11-11 00:59:42.251923 | orchestrator | Tuesday 11 November 2025 00:57:28 +0000 (0:00:00.254) 0:00:00.254 ******
2025-11-11 00:59:42.251928 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.251932 | orchestrator |
2025-11-11 00:59:42.251937 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-11-11 00:59:42.251941 | orchestrator | Tuesday 11 November 2025 00:57:30 +0000 (0:00:01.637) 0:00:01.891 ******
2025-11-11 00:59:42.251945 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.251949 | orchestrator |
2025-11-11 00:59:42.251953 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-11-11 00:59:42.251972 | orchestrator | Tuesday 11 November 2025 00:57:31 +0000 (0:00:01.226) 0:00:03.118 ******
2025-11-11 00:59:42.251976 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.251981 | orchestrator |
2025-11-11 00:59:42.251985 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-11-11 00:59:42.251989 | orchestrator | Tuesday 11 November 2025 00:57:32 +0000 (0:00:01.071) 0:00:04.189 ******
2025-11-11 00:59:42.251993 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.251997 | orchestrator |
2025-11-11 00:59:42.252001 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-11-11 00:59:42.252005 | orchestrator | Tuesday 11 November 2025 00:57:34 +0000 (0:00:01.130) 0:00:05.320 ******
2025-11-11 00:59:42.252009 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.252013 | orchestrator |
2025-11-11 00:59:42.252018 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-11-11 00:59:42.252022 | orchestrator | Tuesday 11 November 2025 00:57:35 +0000 (0:00:01.071) 0:00:06.391 ******
2025-11-11 00:59:42.252026 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.252030 | orchestrator |
2025-11-11 00:59:42.252034 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-11-11 00:59:42.252038 | orchestrator | Tuesday 11 November 2025 00:57:36 +0000 (0:00:01.088) 0:00:07.479 ******
2025-11-11 00:59:42.252042 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.252046 | orchestrator |
2025-11-11 00:59:42.252051 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-11-11 00:59:42.252055 | orchestrator | Tuesday 11 November 2025 00:57:38 +0000 (0:00:02.073) 0:00:09.553 ******
2025-11-11 00:59:42.252059 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.252063 | orchestrator |
2025-11-11 00:59:42.252067 | orchestrator | TASK [Create admin user] *******************************************************
2025-11-11 00:59:42.252080 | orchestrator | Tuesday 11 November 2025 00:57:39 +0000 (0:00:01.245) 0:00:10.798 ******
2025-11-11 00:59:42.252084 | orchestrator | changed: [testbed-manager]
2025-11-11 00:59:42.252088 | orchestrator |
2025-11-11 00:59:42.252093 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-11-11 00:59:42.252097 | orchestrator | Tuesday 11 November 2025 00:58:17 +0000 (0:00:38.129) 0:00:48.928 ******
2025-11-11 00:59:42.252101 | orchestrator | skipping: [testbed-manager]
2025-11-11 00:59:42.252105 | orchestrator |
2025-11-11 00:59:42.252109 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-11-11 00:59:42.252113 | orchestrator |
2025-11-11 00:59:42.252117 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-11-11 00:59:42.252121 | orchestrator | Tuesday 11 November 2025 00:58:17 +0000 (0:00:00.158) 0:00:49.087 ******
2025-11-11 00:59:42.252126 | orchestrator | changed: [testbed-node-0]
2025-11-11 00:59:42.252130 | orchestrator |
2025-11-11 00:59:42.252134 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-11-11 00:59:42.252138 | orchestrator |
2025-11-11 00:59:42.252142 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-11-11 00:59:42.252146 | orchestrator | Tuesday 11 November 2025 00:58:29 +0000 (0:00:11.615) 0:01:00.703 ******
2025-11-11 00:59:42.252150 | orchestrator | changed: [testbed-node-1]
2025-11-11 00:59:42.252154 | orchestrator |
2025-11-11 00:59:42.252159 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-11-11 00:59:42.252163 | orchestrator |
2025-11-11 00:59:42.252167 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-11-11 00:59:42.252171 | orchestrator | Tuesday 11 November 2025 00:58:30 +0000 (0:00:01.244) 0:01:01.947 ******
2025-11-11 00:59:42.252175 | orchestrator | changed: [testbed-node-2]
2025-11-11 00:59:42.252179 | orchestrator |
2025-11-11 00:59:42.252183 | orchestrator | PLAY RECAP *********************************************************************
2025-11-11 00:59:42.252188 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-11 00:59:42.252198 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:59:42.252202 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:59:42.252215 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-11 00:59:42.252224 | orchestrator |
2025-11-11 00:59:42.252228 | orchestrator |
2025-11-11 00:59:42.252232 | orchestrator |
2025-11-11 00:59:42.252237 | orchestrator | TASKS RECAP ********************************************************************
2025-11-11 00:59:42.252241 | orchestrator | Tuesday 11 November 2025 00:58:41 +0000 (0:00:11.133) 0:01:13.081 ******
2025-11-11 00:59:42.252245 | orchestrator | ===============================================================================
2025-11-11 00:59:42.252249 | orchestrator | Create admin user ------------------------------------------------------ 38.13s
2025-11-11 00:59:42.252262 | orchestrator | Restart ceph manager service ------------------------------------------- 23.99s
2025-11-11 00:59:42.252266 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s
2025-11-11 00:59:42.252270 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.64s
2025-11-11 00:59:42.252274 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.25s
2025-11-11 00:59:42.252278 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.23s
2025-11-11 00:59:42.252283 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.13s
2025-11-11 00:59:42.252287 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s
2025-11-11 00:59:42.252291 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s
2025-11-11 00:59:42.252295 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s
2025-11-11 00:59:42.252299 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2025-11-11 00:59:42.252303 | orchestrator |
2025-11-11 00:59:42.382581 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.382673 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.382687 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.382699 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.382711 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.382721 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.382901 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.382983 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.382999 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1422, in get_connection
2025-11-11 00:59:42.383014 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.383027 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.383041 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.383071 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.384349 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.384373 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.384382 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.384390 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.384417 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.385065 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.385093 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.385106 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.385119 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1422, in get_connection
2025-11-11 00:59:42.385132 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.385144 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.385157 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.385170 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.386298 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.386322 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.386330 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.386338 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.386589 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.386614 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.386629 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.386642 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.386655 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1432, in get_connection
2025-11-11 00:59:42.386889 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.386913 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.386926 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.386939 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.388247 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.388266 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.388273 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.388394 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.388407 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.388414 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.388422 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.388429 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.388638 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1432, in get_connection
2025-11-11 00:59:42.388654 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.388848 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.388861 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.388883 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.390167 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.390194 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.390205 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.390216 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.391307 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.391334 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.391357 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.391370 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.391380 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1432, in get_connection
2025-11-11 00:59:42.391390 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.391402 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.391415 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.391426 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.392977 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.393000 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.393012 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.393025 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.393035 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.393061 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.393074 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe
2025-11-11 00:59:42.393085 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command
2025-11-11 00:59:42.393143 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1432, in get_connection
2025-11-11 00:59:42.393158 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect
2025-11-11 00:59:42.393169 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect
2025-11-11 00:59:42.393179 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect
2025-11-11 00:59:42.393190 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down
2025-11-11 00:59:42.394606 | orchestrator | Exception ignored in:
2025-11-11 00:59:42.394627 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:42.394639 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/result.py", line 417, in __del__
2025-11-11 00:59:42.394650 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 208, in remove_pending_result
2025-11-11 00:59:42.394662 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/asynchronous.py", line 216, in on_result_fulfilled
2025-11-11 00:59:42.395335 | orchestrator | File "/usr/local/lib/python3.13/site-packages/celery/backends/redis.py", line 184, in cancel_for
2025-11-11 00:59:42.395358 | orchestrator | File
"/usr/local/lib/python3.13/site-packages/redis/client.py", line 982, in unsubscribe 2025-11-11 00:59:42.395371 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/client.py", line 786, in execute_command 2025-11-11 00:59:42.395400 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 1432, in get_connection 2025-11-11 00:59:42.395413 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 369, in connect 2025-11-11 00:59:42.395424 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/connection.py", line 400, in on_connect 2025-11-11 00:59:42.395436 | orchestrator | File "/usr/local/lib/python3.13/site-packages/redis/_parsers/hiredis.py", line 51, in on_connect 2025-11-11 00:59:42.395449 | orchestrator | ImportError: sys.meta_path is None, Python is likely shutting down 2025-11-11 00:59:42.563323 | orchestrator | 2025-11-11 00:59:42.568368 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Nov 11 00:59:42 UTC 2025 2025-11-11 00:59:42.568431 | orchestrator | 2025-11-11 00:59:42.878662 | orchestrator | ok: Runtime: 0:21:25.519430 2025-11-11 00:59:43.005605 | 2025-11-11 00:59:43.005846 | TASK [Bootstrap services] 2025-11-11 00:59:43.732474 | orchestrator | 2025-11-11 00:59:43.732700 | orchestrator | # BOOTSTRAP 2025-11-11 00:59:43.732726 | orchestrator | 2025-11-11 00:59:43.732740 | orchestrator | + set -e 2025-11-11 00:59:43.732755 | orchestrator | + echo 2025-11-11 00:59:43.732768 | orchestrator | + echo '# BOOTSTRAP' 2025-11-11 00:59:43.732786 | orchestrator | + echo 2025-11-11 00:59:43.732844 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-11-11 00:59:43.740496 | orchestrator | + set -e 2025-11-11 00:59:43.740653 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-11-11 00:59:45.847887 | orchestrator | 2025-11-11 00:59:45 | INFO  | It takes a moment until task ec73bd16-02b3-4379-af4d-47c4cb122668 
(flavor-manager) has been started and output is visible here.
2025-11-11 00:59:55.939795 | orchestrator | Failed to discover available identity versions when contacting https://api.testbed.osism.xyz:5000/v3. Attempting to parse version from URL.
2025-11-11 00:59:55.939981 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:55.940025 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connection.py", line 198, in _new_conn
2025-11-11 00:59:55.940245 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/util/connection.py", line 85, in create_connection
2025-11-11 00:59:55.940300 | orchestrator |     raise err
2025-11-11 00:59:55.940808 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/util/connection.py", line 73, in create_connection
2025-11-11 00:59:55.940874 | orchestrator |     sock.connect(sa)
2025-11-11 00:59:55.969211 | orchestrator |   [rich locals panels omitted: address=('api.testbed.osism.xyz', 5000), resolved sa=('192.168.16.254', 5000), timeout=None; most object reprs already stripped in the captured log]
2025-11-11 00:59:55.969313 | orchestrator | OSError: [Errno 113] Host is unreachable
2025-11-11 00:59:55.969333 | orchestrator |
2025-11-11 00:59:55.969353 | orchestrator | The above exception was the direct cause of the following exception:
2025-11-11 00:59:55.969390 | orchestrator |
2025-11-11 00:59:55.969410 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:55.969429 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connectionpool.py", line 787, in urlopen
2025-11-11 00:59:55.969581 | orchestrator |     response = self._make_request(
2025-11-11 00:59:55.970370 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connectionpool.py", line 488, in _make_request
2025-11-11 00:59:55.995240 | orchestrator |     raise new_e
2025-11-11 00:59:55.995811 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connectionpool.py", line 464, in _make_request
2025-11-11 00:59:55.995895 | orchestrator |     self._validate_conn(conn)
2025-11-11 00:59:55.996376 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connectionpool.py", line 1093, in _validate_conn
2025-11-11 00:59:56.020868 | orchestrator |     conn.connect()
2025-11-11 00:59:56.021080 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connection.py", line 753, in connect
2025-11-11 00:59:56.021133 | orchestrator |     self.sock = sock = self._new_conn()
2025-11-11 00:59:56.021319 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connection.py", line 213, in _new_conn
2025-11-11 00:59:56.021404 | orchestrator |     raise NewConnectionError(
2025-11-11 00:59:56.021476 | orchestrator |   [rich locals panels omitted: method='POST', url='/v3/auth/tokens', headers User-Agent 'openstacksdk/4.7.1 keystoneauth1/5.12.0 python-requests/2.32.5 CPython/3.13.3', retries=Retry(total=0, connect=None, read=False, redirect=None, status=None)]
2025-11-11 00:59:56.021523 | orchestrator | NewConnectionError: Failed to establish a new connection: [Errno 113] Host is unreachable
2025-11-11 00:59:56.021579 | orchestrator |
2025-11-11 00:59:56.021591 | orchestrator | The above exception was the direct cause of the following exception:
2025-11-11 00:59:56.021602 | orchestrator |
2025-11-11 00:59:56.021614 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:56.021626 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/requests/adapters.py", line 644, in send
2025-11-11 00:59:56.021680 | orchestrator |     resp = conn.urlopen(
2025-11-11 00:59:56.021933 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/connectionpool.py", line 841, in urlopen
2025-11-11 00:59:56.054154 | orchestrator |     retries = retries.increment(
2025-11-11 00:59:56.054885 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/urllib3/util/retry.py", line 519, in increment
2025-11-11 00:59:56.054949 | orchestrator |     raise MaxRetryError(_pool, url, reason) from reason
2025-11-11 00:59:56.055002 | orchestrator |   [rich locals panels omitted: url='/v3/auth/tokens', reason=NewConnectionError('Failed to establish a new connection: [Errno 113] Host is unreachable'), verify='/etc/ssl/certs/ca-certificates.crt'; log truncated here]
00:59:56.094715 | orchestrator | │ │ url = '/v3/auth/tokens' │ │ 2025-11-11 00:59:56.094727 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.094744 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-11-11 00:59:56.094755 | orchestrator | MaxRetryError: HTTPSConnectionPool(host='api.testbed.osism.xyz', port=5000): Max 2025-11-11 00:59:56.094769 | orchestrator | retries exceeded with url: /v3/auth/tokens (Caused by 2025-11-11 00:59:56.094782 | orchestrator | NewConnectionError(': Failed to establish a new connection: [Errno 113] Host is 2025-11-11 00:59:56.094805 | orchestrator | unreachable')) 2025-11-11 00:59:56.094816 | orchestrator | 2025-11-11 00:59:56.094827 | orchestrator | During handling of the above exception, another exception occurred: 2025-11-11 00:59:56.094838 | orchestrator | 2025-11-11 00:59:56.094850 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-11-11 00:59:56.094863 | orchestrator | │ /usr/local/lib/python3.13/site-packages/keystoneauth1/session.py:1161 in │ 2025-11-11 00:59:56.094898 | orchestrator | │ _send_request │ 2025-11-11 00:59:56.094910 | orchestrator | │ │ 2025-11-11 00:59:56.094920 | orchestrator | │ 1158 │ │ try: │ 2025-11-11 00:59:56.094931 | orchestrator | │ 1159 │ │ │ try: │ 2025-11-11 00:59:56.094941 | orchestrator | │ 1160 │ │ │ │ with rate_semaphore: │ 2025-11-11 00:59:56.094952 | orchestrator | │ ❱ 1161 │ │ │ │ │ resp = self.session.request(method, url, **kwargs │ 2025-11-11 00:59:56.094962 | orchestrator | │ 1162 │ │ │ except requests.exceptions.SSLError as e: │ 2025-11-11 00:59:56.094973 | orchestrator | │ 1163 │ │ │ │ msg = f'SSL exception connecting to {url}: {e}' │ 2025-11-11 00:59:56.094984 | orchestrator | │ 1164 │ │ │ │ raise exceptions.SSLError(msg) │ 2025-11-11 00:59:56.094994 | orchestrator | │ │ 2025-11-11 00:59:56.095007 | orchestrator | │ 
╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-11-11 00:59:56.095018 | orchestrator | │ │ connect_retries = 0 │ │ 2025-11-11 00:59:56.095028 | orchestrator | │ │ connect_retry_delays = │ │ 2025-11-11 00:59:56.095049 | orchestrator | │ │ kwargs = { │ │ 2025-11-11 00:59:56.095078 | orchestrator | │ │ │ 'headers': { │ │ 2025-11-11 00:59:56.095112 | orchestrator | │ │ │ │ 'Accept': 'application/json', │ │ 2025-11-11 00:59:56.095124 | orchestrator | │ │ │ │ 'User-Agent': 'openstacksdk/4.7.1 │ │ 2025-11-11 00:59:56.095135 | orchestrator | │ │ keystoneauth1/5.12.0 python-requests/2.32.5 │ │ 2025-11-11 00:59:56.095145 | orchestrator | │ │ CPython/3.13.3', │ │ 2025-11-11 00:59:56.095156 | orchestrator | │ │ │ │ 'Content-Type': 'application/json' │ │ 2025-11-11 00:59:56.095167 | orchestrator | │ │ │ }, │ │ 2025-11-11 00:59:56.095183 | orchestrator | │ │ │ 'data': '{"auth": {"identity": │ │ 2025-11-11 00:59:56.095194 | orchestrator | │ │ {"methods": ["password"], "password": │ │ 2025-11-11 00:59:56.095205 | orchestrator | │ │ {"user": {"password"'+137, │ │ 2025-11-11 00:59:56.095216 | orchestrator | │ │ │ 'verify': │ │ 2025-11-11 00:59:56.095226 | orchestrator | │ │ '/etc/ssl/certs/ca-certificates.crt', │ │ 2025-11-11 00:59:56.095237 | orchestrator | │ │ │ 'allow_redirects': False │ │ 2025-11-11 00:59:56.095248 | orchestrator | │ │ } │ │ 2025-11-11 00:59:56.095259 | orchestrator | │ │ log = False │ │ 2025-11-11 00:59:56.095269 | orchestrator | │ │ logger = │ │ 2025-11-11 00:59:56.095280 | orchestrator | │ │ method = 'POST' │ │ 2025-11-11 00:59:56.095291 | orchestrator | │ │ msg = 'Unable to establish connection to │ │ 2025-11-11 00:59:56.095309 | orchestrator | │ │ https://api.testbed.osism.xyz:5000/v3/auth/t… │ │ 2025-11-11 00:59:56.095320 | orchestrator | │ │ rate_semaphore = │ │ 2025-11-11 00:59:56.095341 | orchestrator | │ │ redirect = 30 │ │ 2025-11-11 00:59:56.095352 | orchestrator | │ │ retriable_status_codes = [503] │ │ 2025-11-11 
00:59:56.095362 | orchestrator | │ │ self = │ │ 2025-11-11 00:59:56.095383 | orchestrator | │ │ split_loggers = None │ │ 2025-11-11 00:59:56.095394 | orchestrator | │ │ status_code_retries = 0 │ │ 2025-11-11 00:59:56.095404 | orchestrator | │ │ status_code_retry_delays = │ │ 2025-11-11 00:59:56.095426 | orchestrator | │ │ url = 'https://api.testbed.osism.xyz:5000/v3/auth/… │ │ 2025-11-11 00:59:56.095442 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.095454 | orchestrator | │ │ 2025-11-11 00:59:56.095465 | orchestrator | │ /usr/local/lib/python3.13/site-packages/requests/sessions.py:589 in request │ 2025-11-11 00:59:56.095476 | orchestrator | │ │ 2025-11-11 00:59:56.095486 | orchestrator | │ 586 │ │ │ "allow_redirects": allow_redirects, │ 2025-11-11 00:59:56.095497 | orchestrator | │ 587 │ │ } │ 2025-11-11 00:59:56.095508 | orchestrator | │ 588 │ │ send_kwargs.update(settings) │ 2025-11-11 00:59:56.095518 | orchestrator | │ ❱ 589 │ │ resp = self.send(prep, **send_kwargs) │ 2025-11-11 00:59:56.095529 | orchestrator | │ 590 │ │ │ 2025-11-11 00:59:56.095560 | orchestrator | │ 591 │ │ return resp │ 2025-11-11 00:59:56.095571 | orchestrator | │ 592 │ 2025-11-11 00:59:56.095582 | orchestrator | │ │ 2025-11-11 00:59:56.095593 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-11-11 00:59:56.095604 | orchestrator | │ │ allow_redirects = False │ │ 2025-11-11 00:59:56.095622 | orchestrator | │ │ auth = None │ │ 2025-11-11 00:59:56.127761 | orchestrator | │ │ cert = None │ │ 2025-11-11 00:59:56.127863 | orchestrator | │ │ cookies = None │ │ 2025-11-11 00:59:56.127875 | orchestrator | │ │ data = '{"auth": {"identity": {"methods": ["password"], │ │ 2025-11-11 00:59:56.127886 | orchestrator | │ │ "password": {"user": {"password"'+137 │ │ 2025-11-11 00:59:56.127898 | orchestrator | │ │ files = None │ │ 2025-11-11 00:59:56.127933 | orchestrator | │ │ headers = 
{ │ │ 2025-11-11 00:59:56.127945 | orchestrator | │ │ │ 'Accept': 'application/json', │ │ 2025-11-11 00:59:56.127956 | orchestrator | │ │ │ 'User-Agent': 'openstacksdk/4.7.1 │ │ 2025-11-11 00:59:56.127967 | orchestrator | │ │ keystoneauth1/5.12.0 python-requests/2.32.5 │ │ 2025-11-11 00:59:56.127977 | orchestrator | │ │ CPython/3.13.3', │ │ 2025-11-11 00:59:56.127988 | orchestrator | │ │ │ 'Content-Type': 'application/json' │ │ 2025-11-11 00:59:56.127998 | orchestrator | │ │ } │ │ 2025-11-11 00:59:56.128009 | orchestrator | │ │ hooks = None │ │ 2025-11-11 00:59:56.128020 | orchestrator | │ │ json = None │ │ 2025-11-11 00:59:56.128030 | orchestrator | │ │ method = 'POST' │ │ 2025-11-11 00:59:56.128041 | orchestrator | │ │ params = None │ │ 2025-11-11 00:59:56.128076 | orchestrator | │ │ prep = │ │ 2025-11-11 00:59:56.128087 | orchestrator | │ │ proxies = {} │ │ 2025-11-11 00:59:56.128098 | orchestrator | │ │ req = │ │ 2025-11-11 00:59:56.128109 | orchestrator | │ │ self = │ │ 2025-11-11 00:59:56.128120 | orchestrator | │ │ send_kwargs = { │ │ 2025-11-11 00:59:56.128131 | orchestrator | │ │ │ 'timeout': None, │ │ 2025-11-11 00:59:56.128142 | orchestrator | │ │ │ 'allow_redirects': False, │ │ 2025-11-11 00:59:56.128153 | orchestrator | │ │ │ 'proxies': OrderedDict(), │ │ 2025-11-11 00:59:56.128164 | orchestrator | │ │ │ 'stream': False, │ │ 2025-11-11 00:59:56.128175 | orchestrator | │ │ │ 'verify': '/etc/ssl/certs/ca-certificates.crt', │ │ 2025-11-11 00:59:56.128187 | orchestrator | │ │ │ 'cert': None │ │ 2025-11-11 00:59:56.128200 | orchestrator | │ │ } │ │ 2025-11-11 00:59:56.128211 | orchestrator | │ │ settings = { │ │ 2025-11-11 00:59:56.128223 | orchestrator | │ │ │ 'proxies': OrderedDict(), │ │ 2025-11-11 00:59:56.128235 | orchestrator | │ │ │ 'stream': False, │ │ 2025-11-11 00:59:56.128247 | orchestrator | │ │ │ 'verify': '/etc/ssl/certs/ca-certificates.crt', │ │ 2025-11-11 00:59:56.128259 | orchestrator | │ │ │ 'cert': None │ │ 2025-11-11 00:59:56.128272 | 
orchestrator | │ │ } │ │ 2025-11-11 00:59:56.128283 | orchestrator | │ │ stream = None │ │ 2025-11-11 00:59:56.128296 | orchestrator | │ │ timeout = None │ │ 2025-11-11 00:59:56.128307 | orchestrator | │ │ url = 'https://api.testbed.osism.xyz:5000/v3/auth/tokens' │ │ 2025-11-11 00:59:56.128320 | orchestrator | │ │ verify = '/etc/ssl/certs/ca-certificates.crt' │ │ 2025-11-11 00:59:56.128333 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.128356 | orchestrator | │ │ 2025-11-11 00:59:56.128368 | orchestrator | │ /usr/local/lib/python3.13/site-packages/requests/sessions.py:703 in send │ 2025-11-11 00:59:56.128379 | orchestrator | │ │ 2025-11-11 00:59:56.128390 | orchestrator | │ 700 │ │ start = preferred_clock() │ 2025-11-11 00:59:56.128401 | orchestrator | │ 701 │ │ │ 2025-11-11 00:59:56.128430 | orchestrator | │ 702 │ │ # Send the request │ 2025-11-11 00:59:56.128441 | orchestrator | │ ❱ 703 │ │ r = adapter.send(request, **kwargs) │ 2025-11-11 00:59:56.128452 | orchestrator | │ 704 │ │ │ 2025-11-11 00:59:56.128463 | orchestrator | │ 705 │ │ # Total elapsed time of the request (approximately) │ 2025-11-11 00:59:56.128474 | orchestrator | │ 706 │ │ elapsed = preferred_clock() - start │ 2025-11-11 00:59:56.128485 | orchestrator | │ │ 2025-11-11 00:59:56.128497 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-11-11 00:59:56.128511 | orchestrator | │ │ adapter = │ │ 2025-11-11 00:59:56.128574 | orchestrator | │ │ allow_redirects = False │ │ 2025-11-11 00:59:56.128587 | orchestrator | │ │ hooks = {'response': []} │ │ 2025-11-11 00:59:56.128597 | orchestrator | │ │ kwargs = { │ │ 2025-11-11 00:59:56.128608 | orchestrator | │ │ │ 'timeout': None, │ │ 2025-11-11 00:59:56.128619 | orchestrator | │ │ │ 'proxies': OrderedDict(), │ │ 2025-11-11 00:59:56.128629 | orchestrator | │ │ │ 'stream': False, │ │ 2025-11-11 00:59:56.128640 | orchestrator | │ │ │ 
'verify': '/etc/ssl/certs/ca-certificates.crt', │ │ 2025-11-11 00:59:56.128651 | orchestrator | │ │ │ 'cert': None │ │ 2025-11-11 00:59:56.128661 | orchestrator | │ │ } │ │ 2025-11-11 00:59:56.128672 | orchestrator | │ │ request = │ │ 2025-11-11 00:59:56.128682 | orchestrator | │ │ self = │ │ 2025-11-11 00:59:56.128693 | orchestrator | │ │ start = 1762822789.999306 │ │ 2025-11-11 00:59:56.128704 | orchestrator | │ │ stream = False │ │ 2025-11-11 00:59:56.128715 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.128726 | orchestrator | │ │ 2025-11-11 00:59:56.128737 | orchestrator | │ /usr/local/lib/python3.13/site-packages/requests/adapters.py:677 in send │ 2025-11-11 00:59:56.128748 | orchestrator | │ │ 2025-11-11 00:59:56.128758 | orchestrator | │ 674 │ │ │ │ # This branch is for urllib3 v1.22 and later. │ 2025-11-11 00:59:56.128777 | orchestrator | │ 675 │ │ │ │ raise SSLError(e, request=request) │ 2025-11-11 00:59:56.128788 | orchestrator | │ 676 │ │ │ │ 2025-11-11 00:59:56.128798 | orchestrator | │ ❱ 677 │ │ │ raise ConnectionError(e, request=request) │ 2025-11-11 00:59:56.128809 | orchestrator | │ 678 │ │ │ 2025-11-11 00:59:56.128820 | orchestrator | │ 679 │ │ except ClosedPoolError as e: │ 2025-11-11 00:59:56.128830 | orchestrator | │ 680 │ │ │ raise ConnectionError(e, request=request) │ 2025-11-11 00:59:56.128841 | orchestrator | │ │ 2025-11-11 00:59:56.128852 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-11-11 00:59:56.128863 | orchestrator | │ │ cert = None │ │ 2025-11-11 00:59:56.128873 | orchestrator | │ │ chunked = False │ │ 2025-11-11 00:59:56.128884 | orchestrator | │ │ conn = │ │ 2025-11-11 00:59:56.128912 | orchestrator | │ │ proxies = OrderedDict() │ │ 2025-11-11 00:59:56.128923 | orchestrator | │ │ request = │ │ 2025-11-11 00:59:56.128941 | orchestrator | │ │ self = │ │ 2025-11-11 00:59:56.160272 | orchestrator | │ │ stream 
= False │ │ 2025-11-11 00:59:56.160284 | orchestrator | │ │ timeout = Timeout(connect=None, read=None, total=None) │ │ 2025-11-11 00:59:56.160295 | orchestrator | │ │ url = '/v3/auth/tokens' │ │ 2025-11-11 00:59:56.160320 | orchestrator | │ │ verify = '/etc/ssl/certs/ca-certificates.crt' │ │ 2025-11-11 00:59:56.160332 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.160347 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-11-11 00:59:56.160359 | orchestrator | ConnectionError: HTTPSConnectionPool(host='api.testbed.osism.xyz', port=5000): 2025-11-11 00:59:56.160371 | orchestrator | Max retries exceeded with url: /v3/auth/tokens (Caused by 2025-11-11 00:59:56.160383 | orchestrator | NewConnectionError(': Failed to establish a new connection: [Errno 113] Host is 2025-11-11 00:59:56.160405 | orchestrator | unreachable')) 2025-11-11 00:59:56.160417 | orchestrator | 2025-11-11 00:59:56.160428 | orchestrator | During handling of the above exception, another exception occurred: 2025-11-11 00:59:56.160438 | orchestrator | 2025-11-11 00:59:56.160451 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-11-11 00:59:56.160463 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:205 │ 2025-11-11 00:59:56.160474 | orchestrator | │ in run │ 2025-11-11 00:59:56.160500 | orchestrator | │ │ 2025-11-11 00:59:56.160511 | orchestrator | │ 202 │ │ 2025-11-11 00:59:56.160522 | orchestrator | │ 203 │ definitions = get_flavor_definitions(name, url) │ 2025-11-11 00:59:56.160580 | orchestrator | │ 204 │ manager = FlavorManager( │ 2025-11-11 00:59:56.160592 | orchestrator | │ ❱ 205 │ │ cloud=Cloud(cloud), │ 2025-11-11 00:59:56.160603 | orchestrator | │ 206 │ │ definitions=definitions, │ 2025-11-11 00:59:56.160613 | orchestrator | │ 207 │ │ recommended=recommended, │ 2025-11-11 
00:59:56.160624 | orchestrator | │ 208 │ │ limit_memory=limit_memory, │ 2025-11-11 00:59:56.160635 | orchestrator | │ │ 2025-11-11 00:59:56.160649 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-11-11 00:59:56.160662 | orchestrator | │ │ cloud = 'admin' │ │ 2025-11-11 00:59:56.160674 | orchestrator | │ │ debug = False │ │ 2025-11-11 00:59:56.160686 | orchestrator | │ │ definitions = { │ │ 2025-11-11 00:59:56.160698 | orchestrator | │ │ │ 'reference': [ │ │ 2025-11-11 00:59:56.160710 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-11-11 00:59:56.160722 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-11-11 00:59:56.160734 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-11-11 00:59:56.160747 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-11-11 00:59:56.160758 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-11-11 00:59:56.160771 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-11-11 00:59:56.160784 | orchestrator | │ │ │ ], │ │ 2025-11-11 00:59:56.160808 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-11-11 00:59:56.160821 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.160833 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-11-11 00:59:56.160845 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.160874 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-11-11 00:59:56.160887 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-11-11 00:59:56.160900 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-11-11 00:59:56.160912 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.160924 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-11-11 00:59:56.160937 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-11-11 00:59:56.160950 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.160962 | orchestrator | │ │ │ │ }, │ │ 
2025-11-11 00:59:56.160982 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.160995 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-11-11 00:59:56.161008 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.161019 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-11-11 00:59:56.161030 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-11-11 00:59:56.161040 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-11-11 00:59:56.161051 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.161062 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-11-11 00:59:56.161072 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-11-11 00:59:56.161083 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.161094 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.161104 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.161115 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-11-11 00:59:56.161126 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.161136 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-11-11 00:59:56.161147 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-11-11 00:59:56.161158 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.161168 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.161179 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-11-11 00:59:56.161190 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-11-11 00:59:56.161201 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.161211 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.161222 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.161241 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-11-11 00:59:56.161252 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.161262 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-11-11 00:59:56.161273 | orchestrator | │ │ │ │ 
│ 'disk': 5, │ │ 2025-11-11 00:59:56.161284 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.161294 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.161305 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-11-11 00:59:56.161316 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-11-11 00:59:56.161326 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.161337 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.161355 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.161365 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-11-11 00:59:56.161376 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.161392 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-11-11 00:59:56.192156 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-11-11 00:59:56.192256 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192272 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192298 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-11-11 00:59:56.192310 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-11-11 00:59:56.192320 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.192331 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.192342 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.192353 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-11-11 00:59:56.192364 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.192375 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-11-11 00:59:56.192386 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-11-11 00:59:56.192396 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192407 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192418 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-11-11 00:59:56.192428 | orchestrator | 
│ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-11-11 00:59:56.192439 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.192450 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.192460 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.192471 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-11-11 00:59:56.192482 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.192493 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-11-11 00:59:56.192504 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-11-11 00:59:56.192515 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192525 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192578 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-11-11 00:59:56.192591 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-11-11 00:59:56.192603 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.192614 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.192626 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.192656 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-11-11 00:59:56.192669 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-11-11 00:59:56.192681 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-11-11 00:59:56.192693 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-11-11 00:59:56.192706 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192718 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192730 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-11-11 00:59:56.192741 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-11-11 00:59:56.192753 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.192765 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.192777 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.192790 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 
2025-11-11 00:59:56.192802 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-11-11 00:59:56.192830 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-11-11 00:59:56.192842 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-11-11 00:59:56.192852 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192863 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192874 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-11-11 00:59:56.192885 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-11-11 00:59:56.192895 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.192906 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.192917 | orchestrator | │ │ │ │ { │ │ 2025-11-11 00:59:56.192927 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-11-11 00:59:56.192938 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-11-11 00:59:56.192949 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-11-11 00:59:56.192959 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-11-11 00:59:56.192969 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-11-11 00:59:56.192980 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-11-11 00:59:56.192991 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-11-11 00:59:56.193001 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-11-11 00:59:56.193012 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-11-11 00:59:56.193022 | orchestrator | │ │ │ │ }, │ │ 2025-11-11 00:59:56.193032 | orchestrator | │ │ │ │ ... 
+19 │ │ 2025-11-11 00:59:56.193049 | orchestrator | │ │ │ ] │ │ 2025-11-11 00:59:56.193065 | orchestrator | │ │ } │ │ 2025-11-11 00:59:56.193085 | orchestrator | │ │ level = 'INFO' │ │ 2025-11-11 00:59:56.193114 | orchestrator | │ │ limit_memory = 32 │ │ 2025-11-11 00:59:56.193134 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-11-11 00:59:56.193153 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-11-11 00:59:56.193171 | orchestrator | │ │ name = 'local' │ │ 2025-11-11 00:59:56.193190 | orchestrator | │ │ recommended = False │ │ 2025-11-11 00:59:56.193209 | orchestrator | │ │ url = None │ │ 2025-11-11 00:59:56.193243 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-11-11 00:59:56.193269 | orchestrator | │ │ 2025-11-11 00:59:56.193290 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:37 │ 2025-11-11 00:59:56.193308 | orchestrator | │ in __init__ │ 2025-11-11 00:59:56.193319 | orchestrator | │ │ 2025-11-11 00:59:56.193329 | orchestrator | │ 34 class Cloud: │ 2025-11-11 00:59:56.193340 | orchestrator | │ 35 │ def __init__(self, cloud: str) -> None: │ 2025-11-11 00:59:56.193350 | orchestrator | │ 36 │ │ self.conn = openstack.connect(cloud=cloud) │ 2025-11-11 00:59:56.193361 | orchestrator | │ ❱ 37 │ │ flavors = self.conn.list_flavors() │ 2025-11-11 00:59:56.193371 | orchestrator | │ 38 │ │ self.existing_flavors = {} │ 2025-11-11 00:59:56.193382 | orchestrator | │ 39 │ │ for flavor in flavors: │ 2025-11-11 00:59:56.193392 | orchestrator | │ 40 │ │ │ self.existing_flavors[flavor.name] = flavor │ 2025-11-11 00:59:56.193403 | orchestrator | │ │ 2025-11-11 00:59:56.193429 | orchestrator | │ ╭──────────────────────────────── locals ────────────────────────────────╮ │ 2025-11-11 00:59:56.211451 | orchestrator | │ │ cloud = 'admin' │ │ 2025-11-11 00:59:56.211589 | orchestrator | │ │ self = │ │ 2025-11-11 00:59:56.211619 | orchestrator | │ 
2025-11-11 00:59:56 | orchestrator | Traceback (most recent call last):
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack/cloud/_compute.py", line 249, in list_flavors
2025-11-11 00:59:56 | orchestrator |     self.compute.flavors(details=True, get_extra_specs=get_extra)
2025-11-11 00:59:56 | orchestrator |     # locals: get_extra = False
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack/service_description.py", line 91, in __get__
2025-11-11 00:59:56 | orchestrator |     proxy = self._make_proxy(instance)
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack/service_description.py", line 289, in _make_proxy
2025-11-11 00:59:56 | orchestrator |     found_version = temp_adapter.get_api_major_version()
2025-11-11 00:59:56 | orchestrator |     # locals: supported_versions = [2], version_kwargs = {'version': '2'}, endpoint_override = None
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/adapter.py", line 404, in get_api_major_version
2025-11-11 00:59:56 | orchestrator |     return self.session.get_api_major_version(auth or self.auth, **kwargs)
2025-11-11 00:59:56 | orchestrator |     # locals: kwargs = {'service_type': 'compute', 'interface': 'public', 'version': '2', 'allow_version_hack': True}
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/session.py", line 1470, in get_api_major_version
2025-11-11 00:59:56 | orchestrator |     return auth.get_api_major_version(self, **kwargs)
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/identity/base.py", line 573, in get_api_major_version
2025-11-11 00:59:56 | orchestrator |     data = get_endpoint_data(discover_versions=discover_versions)
2025-11-11 00:59:56 | orchestrator |     # locals: service_type = 'compute', interface = 'public', min_version = (2, 0), max_version = (2, inf), discover_versions = False
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/identity/base.py", line 296, in get_endpoint_data
2025-11-11 00:59:56 | orchestrator |     service_catalog = self.get_access(session).service_catalog
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/identity/base.py", line 139, in get_access
2025-11-11 00:59:56 | orchestrator |     self.auth_ref = self.get_auth_ref(session)
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/identity/generic/base.py", line 223, in get_auth_ref
2025-11-11 00:59:56 | orchestrator |     return plugin.get_auth_ref(session)
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/identity/v3/base.py", line 240, in get_auth_ref
2025-11-11 00:59:56 | orchestrator |     resp = session.post(token_url, json=body, headers=headers, ...)
2025-11-11 00:59:56 | orchestrator |     # locals: token_url = 'https://api.testbed.osism.xyz:5000/v3/auth/tokens'
2025-11-11 00:59:56 | orchestrator |     # locals: body = password auth for user 'admin' (domain 'default'), scoped to project 'admin' (domain 'default')
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/session.py", line 1326, in post
2025-11-11 00:59:56 | orchestrator |     return self.request(url, 'POST', **kwargs)
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/session.py", line 1049, in request
2025-11-11 00:59:56 | orchestrator |     resp = send(**kwargs)
2025-11-11 00:59:56 | orchestrator |     # locals: connect_retries = 0, retriable_status_codes = [503], verify = '/etc/ssl/certs/ca-certificates.crt',
2025-11-11 00:59:56 | orchestrator |     #         user_agent = 'openstacksdk/4.7.1 keystoneauth1/5.12.0 python-requests/2.32.5 CPython/3.13.3'
2025-11-11 00:59:56 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/keystoneauth1/session.py", line 1176, in _send_request
2025-11-11 00:59:56 | orchestrator |     raise exceptions.ConnectFailure(msg)
2025-11-11 00:59:56.348037 | orchestrator | ConnectFailure: Unable to establish connection to https://api.testbed.osism.xyz:5000/v3/auth/tokens: HTTPSConnectionPool(host='api.testbed.osism.xyz', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] Host is unreachable'))
2025-11-11 00:59:57.055601 | orchestrator | ERROR
2025-11-11 00:59:57.055825 | orchestrator | {
2025-11-11 00:59:57.055943 | orchestrator |   "delta": "0:00:13.332251",
2025-11-11 00:59:57.055967 | orchestrator |   "end": "2025-11-11 00:59:56.632568",
2025-11-11 00:59:57.055987 | orchestrator |   "msg": "non-zero return code",
2025-11-11 00:59:57.056006 | orchestrator |   "rc": 1,
2025-11-11 00:59:57.056025 | orchestrator |   "start": "2025-11-11 00:59:43.300317"
2025-11-11 00:59:57.056025 | orchestrator | } failure
2025-11-11 00:59:57.092206 | PLAY RECAP
2025-11-11 00:59:57.092272 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-11-11 00:59:57.273773 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-11-11 00:59:57.274820 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-11 00:59:58.077881 | PLAY [Post output play]
2025-11-11 00:59:58.093740 | LOOP [stage-output : Register sources]
2025-11-11 00:59:58.157309 | TASK [stage-output : Check sudo]
2025-11-11 00:59:58.996771 | orchestrator | sudo: a password is required
2025-11-11 00:59:59.202008 | orchestrator | ok: Runtime: 0:00:00.015437
2025-11-11 00:59:59.220851 | LOOP [stage-output : Set source and destination for files and folders]
2025-11-11 00:59:59.257922 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-11-11 00:59:59.325834 | orchestrator | ok
2025-11-11 00:59:59.334116 | LOOP [stage-output : Ensure target folders exist]
2025-11-11 00:59:59.765144 | orchestrator | ok: "docs"
2025-11-11 00:59:59.986057 | orchestrator | ok: "artifacts"
2025-11-11 01:00:00.194338 | orchestrator | ok: "logs"
2025-11-11 01:00:00.214652 | LOOP [stage-output : Copy files and folders to staging folder]
2025-11-11 01:00:00.258161 | TASK [stage-output : Make all log files readable]
2025-11-11 01:00:00.533419 | orchestrator | ok
2025-11-11 01:00:00.543638 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-11-11 01:00:00.578317 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:00.592276 | TASK [stage-output : Discover log files for compression]
2025-11-11 01:00:00.617571 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:00.628967 | LOOP [stage-output : Archive everything from logs]
2025-11-11 01:00:00.670007 | PLAY [Post cleanup play]
2025-11-11 01:00:00.679423 | TASK [Set cloud fact (Zuul deployment)]
2025-11-11 01:00:00.732485 | orchestrator | ok
2025-11-11 01:00:00.743486 | TASK [Set cloud fact (local deployment)]
2025-11-11 01:00:00.777556 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:00.788757 | TASK [Clean the cloud environment]
2025-11-11 01:00:01.417186 | orchestrator | 2025-11-11 01:00:01 - clean up servers
2025-11-11 01:00:02.226479 | orchestrator | 2025-11-11 01:00:02 - testbed-manager
2025-11-11 01:00:02.318217 | orchestrator | 2025-11-11 01:00:02 - testbed-node-0
2025-11-11 01:00:02.406456 | orchestrator | 2025-11-11 01:00:02 - testbed-node-4
2025-11-11 01:00:02.518801 | orchestrator | 2025-11-11 01:00:02 - testbed-node-1
2025-11-11 01:00:02.605726 | orchestrator | 2025-11-11 01:00:02 - testbed-node-3
2025-11-11 01:00:02.691350 | orchestrator | 2025-11-11 01:00:02 - testbed-node-2
2025-11-11 01:00:02.793316 | orchestrator | 2025-11-11 01:00:02 - testbed-node-5
2025-11-11 01:00:02.905580 | orchestrator | 2025-11-11 01:00:02 - clean up keypairs
2025-11-11 01:00:02.928265 | orchestrator | 2025-11-11 01:00:02 - testbed
2025-11-11 01:00:02.953273 | orchestrator | 2025-11-11
01:00:02 - wait for servers to be gone 2025-11-11 01:00:11.798961 | orchestrator | 2025-11-11 01:00:11 - clean up ports 2025-11-11 01:00:12.038102 | orchestrator | 2025-11-11 01:00:12 - 046891ca-0b5c-40aa-b354-5b0e5c7203aa 2025-11-11 01:00:12.333158 | orchestrator | 2025-11-11 01:00:12 - 4a3981e2-f0be-42b6-8db3-1c199c343817 2025-11-11 01:00:12.625882 | orchestrator | 2025-11-11 01:00:12 - 8dcf13b9-9617-4f95-be41-aea070639f0e 2025-11-11 01:00:13.049748 | orchestrator | 2025-11-11 01:00:13 - 8e156d3c-5b80-4fbd-ad5c-5330981c24e0 2025-11-11 01:00:13.277573 | orchestrator | 2025-11-11 01:00:13 - c81b3325-afc5-4c8d-a3cb-d2e3d5168dc8 2025-11-11 01:00:13.673072 | orchestrator | 2025-11-11 01:00:13 - cbbb3b78-0c4f-4c9d-8646-ccb4d26dbeb0 2025-11-11 01:00:13.876905 | orchestrator | 2025-11-11 01:00:13 - d9417825-210b-4296-b532-140164fd716b 2025-11-11 01:00:14.091798 | orchestrator | 2025-11-11 01:00:14 - clean up volumes 2025-11-11 01:00:14.198134 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-4-node-base 2025-11-11 01:00:14.239908 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-3-node-base 2025-11-11 01:00:14.279377 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-5-node-base 2025-11-11 01:00:14.328601 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-2-node-base 2025-11-11 01:00:14.368793 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-0-node-base 2025-11-11 01:00:14.409521 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-1-node-base 2025-11-11 01:00:14.451370 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-manager-base 2025-11-11 01:00:14.496629 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-0-node-3 2025-11-11 01:00:14.538351 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-4-node-4 2025-11-11 01:00:14.580677 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-7-node-4 2025-11-11 01:00:14.622858 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-2-node-5 2025-11-11 01:00:14.663717 | orchestrator | 2025-11-11 01:00:14 
- testbed-volume-3-node-3 2025-11-11 01:00:14.707715 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-5-node-5 2025-11-11 01:00:14.752737 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-6-node-3 2025-11-11 01:00:14.795633 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-8-node-5 2025-11-11 01:00:14.835214 | orchestrator | 2025-11-11 01:00:14 - testbed-volume-1-node-4 2025-11-11 01:00:14.876046 | orchestrator | 2025-11-11 01:00:14 - disconnect routers 2025-11-11 01:00:14.990298 | orchestrator | 2025-11-11 01:00:14 - testbed 2025-11-11 01:00:16.066235 | orchestrator | 2025-11-11 01:00:16 - clean up subnets 2025-11-11 01:00:16.122589 | orchestrator | 2025-11-11 01:00:16 - subnet-testbed-management 2025-11-11 01:00:16.291391 | orchestrator | 2025-11-11 01:00:16 - clean up networks 2025-11-11 01:00:16.497844 | orchestrator | 2025-11-11 01:00:16 - net-testbed-management 2025-11-11 01:00:16.800375 | orchestrator | 2025-11-11 01:00:16 - clean up security groups 2025-11-11 01:00:16.851083 | orchestrator | 2025-11-11 01:00:16 - testbed-node 2025-11-11 01:00:16.994319 | orchestrator | 2025-11-11 01:00:16 - testbed-management 2025-11-11 01:00:17.110976 | orchestrator | 2025-11-11 01:00:17 - clean up floating ips 2025-11-11 01:00:17.158106 | orchestrator | 2025-11-11 01:00:17 - 81.163.192.227 2025-11-11 01:00:17.595954 | orchestrator | 2025-11-11 01:00:17 - clean up routers 2025-11-11 01:00:17.707136 | orchestrator | 2025-11-11 01:00:17 - testbed 2025-11-11 01:00:18.846635 | orchestrator | ok: Runtime: 0:00:17.502288 2025-11-11 01:00:18.848585 | 2025-11-11 01:00:18.848681 | PLAY RECAP 2025-11-11 01:00:18.848739 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-11-11 01:00:18.848765 | 2025-11-11 01:00:19.003887 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-11-11 01:00:19.004886 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 
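The "Clean the cloud environment" task above deletes OpenStack resources in a fixed dependency order: consumers go first (servers), then the things they held (ports, volumes), then the network fabric from the inside out (router interfaces, subnets, networks, security groups, floating IPs, routers). A minimal sketch of that ordering, assuming a plain `{kind: [names]}` inventory; the names `TEARDOWN_ORDER` and `teardown_plan` are hypothetical, not part of the actual OSISM cleanup script:

```python
# Hypothetical sketch of the teardown ordering visible in the log above.
# Dependent resources are removed before the resources they depend on,
# so each delete step finds no remaining users of its target.

TEARDOWN_ORDER = [
    "servers",
    "keypairs",
    "ports",              # freed once their servers are gone
    "volumes",
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]

def teardown_plan(resources: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Flatten a {kind: [names]} inventory into an ordered delete plan."""
    return [(kind, name)
            for kind in TEARDOWN_ORDER
            for name in resources.get(kind, [])]
```

Running the plan over the resources named in this log would, for example, schedule `testbed-manager` (a server) long before the `testbed` router, matching the timestamps above.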
2025-11-11 01:00:19.804214 |
2025-11-11 01:00:19.804454 | PLAY [Cleanup play]
2025-11-11 01:00:19.820404 |
2025-11-11 01:00:19.820546 | TASK [Set cloud fact (Zuul deployment)]
2025-11-11 01:00:19.860727 | orchestrator | ok
2025-11-11 01:00:19.867653 |
2025-11-11 01:00:19.867805 | TASK [Set cloud fact (local deployment)]
2025-11-11 01:00:19.891794 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:19.900190 |
2025-11-11 01:00:19.900364 | TASK [Clean the cloud environment]
2025-11-11 01:00:21.256473 | orchestrator | 2025-11-11 01:00:21 - clean up servers
2025-11-11 01:00:21.816800 | orchestrator | 2025-11-11 01:00:21 - clean up keypairs
2025-11-11 01:00:21.830585 | orchestrator | 2025-11-11 01:00:21 - wait for servers to be gone
2025-11-11 01:00:21.871781 | orchestrator | 2025-11-11 01:00:21 - clean up ports
2025-11-11 01:00:21.948632 | orchestrator | 2025-11-11 01:00:21 - clean up volumes
2025-11-11 01:00:22.010227 | orchestrator | 2025-11-11 01:00:22 - disconnect routers
2025-11-11 01:00:22.043338 | orchestrator | 2025-11-11 01:00:22 - clean up subnets
2025-11-11 01:00:22.069656 | orchestrator | 2025-11-11 01:00:22 - clean up networks
2025-11-11 01:00:22.224152 | orchestrator | 2025-11-11 01:00:22 - clean up security groups
2025-11-11 01:00:22.260711 | orchestrator | 2025-11-11 01:00:22 - clean up floating ips
2025-11-11 01:00:22.288622 | orchestrator | 2025-11-11 01:00:22 - clean up routers
2025-11-11 01:00:22.482699 | orchestrator | ok: Runtime: 0:00:01.489349
2025-11-11 01:00:22.486076 |
2025-11-11 01:00:22.486231 | PLAY RECAP
2025-11-11 01:00:22.486379 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-11-11 01:00:22.486441 |
2025-11-11 01:00:22.648690 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-11 01:00:22.651126 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-11 01:00:23.532308 |
2025-11-11 01:00:23.532483 | PLAY [Base post-fetch]
2025-11-11 01:00:23.572486 |
2025-11-11 01:00:23.572654 | TASK [fetch-output : Set log path for multiple nodes]
2025-11-11 01:00:23.643234 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:23.650188 |
2025-11-11 01:00:23.650368 | TASK [fetch-output : Set log path for single node]
2025-11-11 01:00:23.681160 | orchestrator | ok
2025-11-11 01:00:23.687387 |
2025-11-11 01:00:23.687499 | LOOP [fetch-output : Ensure local output dirs]
2025-11-11 01:00:24.183103 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/logs"
2025-11-11 01:00:24.476364 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/artifacts"
2025-11-11 01:00:24.835207 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4bdd915bb0514d86bdfa070d35e992a8/work/docs"
2025-11-11 01:00:24.859522 |
2025-11-11 01:00:24.859665 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-11-11 01:00:26.020498 | orchestrator | changed: .d..t...... ./
2025-11-11 01:00:26.020958 | orchestrator | changed: All items complete
2025-11-11 01:00:26.022042 |
2025-11-11 01:00:26.805373 | orchestrator | changed: .d..t...... ./
2025-11-11 01:00:27.512633 | orchestrator | changed: .d..t...... ./
2025-11-11 01:00:27.543712 |
2025-11-11 01:00:27.543853 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-11-11 01:00:27.568633 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:27.572664 | orchestrator | skipping: Conditional result was False
2025-11-11 01:00:27.586417 |
2025-11-11 01:00:27.586509 | PLAY RECAP
2025-11-11 01:00:27.586560 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-11-11 01:00:27.586587 |
2025-11-11 01:00:27.743656 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-11 01:00:27.744700 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-11 01:00:28.512045 |
2025-11-11 01:00:28.512195 | PLAY [Base post]
2025-11-11 01:00:28.526360 |
2025-11-11 01:00:28.526488 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-11-11 01:00:29.729453 | orchestrator | changed
2025-11-11 01:00:29.740701 |
2025-11-11 01:00:29.740862 | PLAY RECAP
2025-11-11 01:00:29.740941 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-11-11 01:00:29.741017 |
2025-11-11 01:00:29.866068 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-11 01:00:29.867151 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-11-11 01:00:30.665921 |
2025-11-11 01:00:30.666084 | PLAY [Base post-logs]
2025-11-11 01:00:30.676672 |
2025-11-11 01:00:30.676800 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-11-11 01:00:31.127949 | localhost | changed
2025-11-11 01:00:31.138174 |
2025-11-11 01:00:31.138334 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-11-11 01:00:31.174504 | localhost | ok
2025-11-11 01:00:31.178499 |
2025-11-11 01:00:31.178614 | TASK [Set zuul-log-path fact]
2025-11-11 01:00:31.193792 | localhost | ok
2025-11-11 01:00:31.202633 |
2025-11-11 01:00:31.202741 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-11 01:00:31.227638 | localhost | ok
2025-11-11 01:00:31.231521 |
2025-11-11 01:00:31.231643 | TASK [upload-logs : Create log directories]
2025-11-11 01:00:31.758939 | localhost | changed
2025-11-11 01:00:31.763558 |
2025-11-11 01:00:31.763708 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-11-11 01:00:32.254496 | localhost -> localhost | ok: Runtime: 0:00:00.007367
2025-11-11 01:00:32.263226 |
2025-11-11 01:00:32.263522 | TASK [upload-logs : Upload logs to log server]
2025-11-11 01:00:32.856485 | localhost | Output suppressed because no_log was given
2025-11-11 01:00:32.858392 |
2025-11-11 01:00:32.858503 | LOOP [upload-logs : Compress console log and json output]
2025-11-11 01:00:32.910919 | localhost | skipping: Conditional result was False
2025-11-11 01:00:32.917803 | localhost | skipping: Conditional result was False
2025-11-11 01:00:32.926491 |
2025-11-11 01:00:32.926640 | LOOP [upload-logs : Upload compressed console log and json output]
2025-11-11 01:00:33.003931 | localhost | skipping: Conditional result was False
2025-11-11 01:00:33.004295 |
2025-11-11 01:00:33.008869 | localhost | skipping: Conditional result was False
2025-11-11 01:00:33.012926 |
2025-11-11 01:00:33.013033 | LOOP [upload-logs : Upload console log and json output]
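The root cause of this failed run is the ConnectFailure earlier in the log: the POST to https://api.testbed.osism.xyz:5000/v3/auth/tokens died with `[Errno 113] Host is unreachable`, and the locals panel shows `connect_retries = 0`, so keystoneauth abandoned the request on the first failed connection attempt. A minimal sketch of the two moving parts, assuming an exponential backoff starting at 0.5 s; the helpers `describe_connect_failure` and `retry_delays` are hypothetical illustrations, not keystoneauth's API:

```python
import errno

def describe_connect_failure(code: int) -> str:
    """Map a numeric errno (e.g. the 113 in the traceback) to its symbolic name."""
    return errno.errorcode.get(code, "unknown")

def retry_delays(connect_retries: int, initial: float = 0.5) -> list[float]:
    """Backoff schedule in the style keystoneauth applies when connect
    retries are enabled; with connect_retries = 0, as in this run, the
    schedule is empty and the first unreachable host is fatal."""
    return [initial * (2 ** attempt) for attempt in range(connect_retries)]

# On Linux, errno 113 is EHOSTUNREACH ("Host is unreachable"): the
# orchestrator got no route to the Keystone endpoint at all, which points
# at the floating IP / router teardown race rather than at Keystone itself.
```

Since the post-run cleanup plays completed normally, the unreachable endpoint was a transient network condition during the deploy play, not a leftover-resource problem.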